2026-01-28 00:00:13.333025 | Job console starting
2026-01-28 00:00:13.352487 | Updating git repos
2026-01-28 00:00:13.447031 | Cloning repos into workspace
2026-01-28 00:00:13.959992 | Restoring repo states
2026-01-28 00:00:14.028147 | Merging changes
2026-01-28 00:00:14.028168 | Checking out repos
2026-01-28 00:00:14.563697 | Preparing playbooks
2026-01-28 00:00:15.840124 | Running Ansible setup
2026-01-28 00:00:24.624271 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-01-28 00:00:27.396376 |
2026-01-28 00:00:27.396508 | PLAY [Base pre]
2026-01-28 00:00:27.436999 |
2026-01-28 00:00:27.437124 | TASK [Setup log path fact]
2026-01-28 00:00:27.467415 | orchestrator | ok
2026-01-28 00:00:27.519091 |
2026-01-28 00:00:27.519234 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-28 00:00:27.568569 | orchestrator | ok
2026-01-28 00:00:27.590774 |
2026-01-28 00:00:27.590912 | TASK [emit-job-header : Print job information]
2026-01-28 00:00:27.660331 | # Job Information
2026-01-28 00:00:27.660490 | Ansible Version: 2.16.14
2026-01-28 00:00:27.660524 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-01-28 00:00:27.660557 | Pipeline: periodic-midnight
2026-01-28 00:00:27.660580 | Executor: 521e9411259a
2026-01-28 00:00:27.660600 | Triggered by: https://github.com/osism/testbed
2026-01-28 00:00:27.660622 | Event ID: e7df12b0c4c048fe8426a86be1c6b4f0
2026-01-28 00:00:27.671234 |
2026-01-28 00:00:27.671340 | LOOP [emit-job-header : Print node information]
2026-01-28 00:00:27.943856 | orchestrator | ok:
2026-01-28 00:00:27.943998 | orchestrator | # Node Information
2026-01-28 00:00:27.944027 | orchestrator | Inventory Hostname: orchestrator
2026-01-28 00:00:27.944047 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-01-28 00:00:27.944065 | orchestrator | Username: zuul-testbed02
2026-01-28 00:00:27.944082 | orchestrator | Distro: Debian 12.13
2026-01-28 00:00:27.944101 | orchestrator | Provider: static-testbed
2026-01-28 00:00:27.944119 | orchestrator | Region:
2026-01-28 00:00:27.944136 | orchestrator | Label: testbed-orchestrator
2026-01-28 00:00:27.944152 | orchestrator | Product Name: OpenStack Nova
2026-01-28 00:00:27.944169 | orchestrator | Interface IP: 81.163.193.140
2026-01-28 00:00:27.957347 |
2026-01-28 00:00:27.957484 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-01-28 00:00:29.364561 | orchestrator -> localhost | changed
2026-01-28 00:00:29.372264 |
2026-01-28 00:00:29.372366 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-01-28 00:00:31.959736 | orchestrator -> localhost | changed
2026-01-28 00:00:31.971141 |
2026-01-28 00:00:31.971241 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-01-28 00:00:32.916118 | orchestrator -> localhost | ok
2026-01-28 00:00:32.921956 |
2026-01-28 00:00:32.922063 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-01-28 00:00:32.969811 | orchestrator | ok
2026-01-28 00:00:33.004471 | orchestrator | included: /var/lib/zuul/builds/659fa68aa70e4b8f8d01f23a210e331e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-01-28 00:00:33.010818 |
2026-01-28 00:00:33.041300 | TASK [add-build-sshkey : Create Temp SSH key]
2026-01-28 00:00:37.318143 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-01-28 00:00:37.318307 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/659fa68aa70e4b8f8d01f23a210e331e/work/659fa68aa70e4b8f8d01f23a210e331e_id_rsa
2026-01-28 00:00:37.318340 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/659fa68aa70e4b8f8d01f23a210e331e/work/659fa68aa70e4b8f8d01f23a210e331e_id_rsa.pub
2026-01-28 00:00:37.318362 | orchestrator -> localhost | The key fingerprint is:
2026-01-28 00:00:37.318385 | orchestrator -> localhost | SHA256:WJFEcmSI6fC7i9olBSdLkOj/g3d5E1UyiYewSGWOY98 zuul-build-sshkey
2026-01-28 00:00:37.318404 | orchestrator -> localhost | The key's randomart image is:
2026-01-28 00:00:37.318435 | orchestrator -> localhost | +---[RSA 3072]----+
2026-01-28 00:00:37.318454 | orchestrator -> localhost | |o. +oOO.o . |
2026-01-28 00:00:37.318472 | orchestrator -> localhost | |o.. + *+o+ = . |
2026-01-28 00:00:37.318489 | orchestrator -> localhost | |. ++.= o. . + |
2026-01-28 00:00:37.318504 | orchestrator -> localhost | | o =+ oo. . |
2026-01-28 00:00:37.318520 | orchestrator -> localhost | | o ....SE. |
2026-01-28 00:00:37.318539 | orchestrator -> localhost | | o. . |
2026-01-28 00:00:37.318556 | orchestrator -> localhost | | . +. . . |
2026-01-28 00:00:37.318572 | orchestrator -> localhost | | . =.+ o o |
2026-01-28 00:00:37.318589 | orchestrator -> localhost | |..o o.o . . |
2026-01-28 00:00:37.318606 | orchestrator -> localhost | +----[SHA256]-----+
2026-01-28 00:00:37.318645 | orchestrator -> localhost | ok: Runtime: 0:00:03.167375
2026-01-28 00:00:37.325157 |
2026-01-28 00:00:37.325239 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-01-28 00:00:37.372476 | orchestrator | ok
2026-01-28 00:00:37.390902 | orchestrator | included: /var/lib/zuul/builds/659fa68aa70e4b8f8d01f23a210e331e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-01-28 00:00:37.444905 |
2026-01-28 00:00:37.445005 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-01-28 00:00:37.483283 | orchestrator | skipping: Conditional result was False
2026-01-28 00:00:37.490819 |
2026-01-28 00:00:37.494158 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-01-28 00:00:38.226898 | orchestrator | changed
2026-01-28 00:00:38.232331 |
2026-01-28 00:00:38.232412 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-01-28 00:00:38.538947 | orchestrator | ok
2026-01-28 00:00:38.544132 |
2026-01-28 00:00:38.544219 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-01-28 00:00:39.011306 | orchestrator | ok
2026-01-28 00:00:39.033497 |
2026-01-28 00:00:39.033607 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-01-28 00:00:39.548464 | orchestrator | ok
2026-01-28 00:00:39.553353 |
2026-01-28 00:00:39.559278 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-01-28 00:00:39.602171 | orchestrator | skipping: Conditional result was False
2026-01-28 00:00:39.607745 |
2026-01-28 00:00:39.607858 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-01-28 00:00:40.517769 | orchestrator -> localhost | changed
2026-01-28 00:00:40.540590 |
2026-01-28 00:00:40.540687 | TASK [add-build-sshkey : Add back temp key]
2026-01-28 00:00:41.297062 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/659fa68aa70e4b8f8d01f23a210e331e/work/659fa68aa70e4b8f8d01f23a210e331e_id_rsa (zuul-build-sshkey)
2026-01-28 00:00:41.297321 | orchestrator -> localhost | ok: Runtime: 0:00:00.008929
2026-01-28 00:00:41.305576 |
2026-01-28 00:00:41.305668 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-01-28 00:00:42.086543 | orchestrator | ok
2026-01-28 00:00:42.091431 |
2026-01-28 00:00:42.091519 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-01-28 00:00:42.163854 | orchestrator | skipping: Conditional result was False
2026-01-28 00:00:42.305369 |
2026-01-28 00:00:42.305468 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-01-28 00:00:42.941190 | orchestrator | ok
2026-01-28 00:00:42.962318 |
2026-01-28 00:00:42.962415 | TASK [validate-host : Define zuul_info_dir fact]
2026-01-28 00:00:43.024514 | orchestrator | ok
2026-01-28 00:00:43.036917 |
2026-01-28 00:00:43.037008 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-01-28 00:00:43.726482 | orchestrator -> localhost | ok
2026-01-28 00:00:43.732507 |
2026-01-28 00:00:43.732596 | TASK [validate-host : Collect information about the host]
2026-01-28 00:00:45.036110 | orchestrator | ok
2026-01-28 00:00:45.077465 |
2026-01-28 00:00:45.077574 | TASK [validate-host : Sanitize hostname]
2026-01-28 00:00:45.170063 | orchestrator | ok
2026-01-28 00:00:45.174339 |
2026-01-28 00:00:45.174421 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-01-28 00:00:46.258002 | orchestrator -> localhost | changed
2026-01-28 00:00:46.269906 |
2026-01-28 00:00:46.270050 | TASK [validate-host : Collect information about zuul worker]
2026-01-28 00:00:46.949142 | orchestrator | ok
2026-01-28 00:00:46.971055 |
2026-01-28 00:00:46.971173 | TASK [validate-host : Write out all zuul information for each host]
2026-01-28 00:00:48.198488 | orchestrator -> localhost | changed
2026-01-28 00:00:48.221352 |
2026-01-28 00:00:48.222416 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-01-28 00:00:48.551214 | orchestrator | ok
2026-01-28 00:00:48.556136 |
2026-01-28 00:00:48.556214 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-01-28 00:02:05.838445 | orchestrator | changed:
2026-01-28 00:02:05.838762 | orchestrator | .d..t...... src/
2026-01-28 00:02:05.838815 | orchestrator | .d..t...... src/github.com/
2026-01-28 00:02:05.838947 | orchestrator | .d..t...... src/github.com/osism/
2026-01-28 00:02:05.838980 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-01-28 00:02:05.839009 | orchestrator | RedHat.yml
2026-01-28 00:02:05.856017 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-01-28 00:02:05.856035 | orchestrator | RedHat.yml
2026-01-28 00:02:05.856087 | orchestrator | = 2.2.0"...
2026-01-28 00:02:18.744670 | orchestrator | - Finding latest version of hashicorp/null...
2026-01-28 00:02:18.761316 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-01-28 00:02:18.912843 | orchestrator | - Installing hashicorp/local v2.6.1...
2026-01-28 00:02:19.443279 | orchestrator | - Installed hashicorp/local v2.6.1 (signed, key ID 0C0AF313E5FD9F80)
2026-01-28 00:02:19.509542 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-01-28 00:02:19.917027 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-01-28 00:02:19.983630 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-01-28 00:02:20.748185 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-01-28 00:02:20.748247 | orchestrator |
2026-01-28 00:02:20.748254 | orchestrator | Providers are signed by their developers.
2026-01-28 00:02:20.748260 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-01-28 00:02:20.748265 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-01-28 00:02:20.748273 | orchestrator |
2026-01-28 00:02:20.748278 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-01-28 00:02:20.748288 | orchestrator | selections it made above. Include this file in your version control repository
2026-01-28 00:02:20.748292 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-01-28 00:02:20.748296 | orchestrator | you run "tofu init" in the future.
2026-01-28 00:02:20.748301 | orchestrator |
2026-01-28 00:02:20.748305 | orchestrator | OpenTofu has been successfully initialized!
2026-01-28 00:02:20.748309 | orchestrator |
2026-01-28 00:02:20.748313 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-01-28 00:02:20.748317 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-01-28 00:02:20.748321 | orchestrator | should now work.
2026-01-28 00:02:20.748325 | orchestrator |
2026-01-28 00:02:20.748329 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-01-28 00:02:20.748333 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-01-28 00:02:20.748337 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-01-28 00:02:20.933028 | orchestrator | Created and switched to workspace "ci"!
2026-01-28 00:02:20.933192 | orchestrator |
2026-01-28 00:02:20.933203 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-01-28 00:02:20.933210 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-01-28 00:02:20.933216 | orchestrator | for this configuration.
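The provider installs shown in the `tofu init` output above are driven by version constraints declared in the testbed's OpenTofu configuration. A minimal sketch of what such a `required_providers` block could look like — the `openstack` constraint (`>= 1.53.0`) is taken from the log; the `local` and `null` entries are illustrative assumptions, since their exact constraints are truncated in this capture (the stray `= 2.2.0"...` line), and this is not the testbed repository's actual file:

```hcl
terraform {
  required_providers {
    # Constraint visible in the "Finding ... versions matching" line above.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    # Illustrative: the log shows local v2.6.1 and null v3.2.4 being
    # installed, but their declared constraints are cut off in this capture.
    local = {
      source = "hashicorp/local"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}
```

Against a block like this, `tofu init` resolves and installs the providers (producing the "Finding"/"Installing" lines) and records the selected versions in `.terraform.lock.hcl`, as the log describes.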
2026-01-28 00:02:21.118966 | orchestrator | ci.auto.tfvars
2026-01-28 00:02:21.432448 | orchestrator | default_custom.tf
2026-01-28 00:02:24.050314 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-01-28 00:02:24.606397 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-01-28 00:02:24.873821 | orchestrator |
2026-01-28 00:02:24.873890 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-01-28 00:02:24.873898 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-01-28 00:02:24.873903 | orchestrator |   + create
2026-01-28 00:02:24.873909 | orchestrator |  <= read (data resources)
2026-01-28 00:02:24.873914 | orchestrator |
2026-01-28 00:02:24.873919 | orchestrator | OpenTofu will perform the following actions:
2026-01-28 00:02:24.873931 | orchestrator |
2026-01-28 00:02:24.873936 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-01-28 00:02:24.873941 | orchestrator | # (config refers to values not yet known)
2026-01-28 00:02:24.873946 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-01-28 00:02:24.873951 | orchestrator | + checksum = (known after apply)
2026-01-28 00:02:24.873956 | orchestrator | + created_at = (known after apply)
2026-01-28 00:02:24.873961 | orchestrator | + file = (known after apply)
2026-01-28 00:02:24.873969 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.873991 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.873995 | orchestrator | + min_disk_gb = (known after apply)
2026-01-28 00:02:24.873999 | orchestrator | + min_ram_mb = (known after apply)
2026-01-28 00:02:24.874003 | orchestrator | + most_recent = true
2026-01-28 00:02:24.874007 | orchestrator | + name = (known after apply)
2026-01-28 00:02:24.874032 | orchestrator | + protected = (known after apply)
2026-01-28 00:02:24.874037 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.874043 | orchestrator | + schema = (known after apply)
2026-01-28 00:02:24.874047 | orchestrator | + size_bytes = (known after apply)
2026-01-28 00:02:24.874051 | orchestrator | + tags = (known after apply)
2026-01-28 00:02:24.874055 | orchestrator | + updated_at = (known after apply)
2026-01-28 00:02:24.874059 | orchestrator | }
2026-01-28 00:02:24.874065 | orchestrator |
2026-01-28 00:02:24.874069 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-01-28 00:02:24.874073 | orchestrator | # (config refers to values not yet known)
2026-01-28 00:02:24.874077 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-01-28 00:02:24.874081 | orchestrator | + checksum = (known after apply)
2026-01-28 00:02:24.874085 | orchestrator | + created_at = (known after apply)
2026-01-28 00:02:24.874089 | orchestrator | + file = (known after apply)
2026-01-28 00:02:24.874092 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.874096 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.874100 | orchestrator | + min_disk_gb = (known after apply)
2026-01-28 00:02:24.874104 | orchestrator | + min_ram_mb = (known after apply)
2026-01-28 00:02:24.874108 | orchestrator | + most_recent = true
2026-01-28 00:02:24.874111 | orchestrator | + name = (known after apply)
2026-01-28 00:02:24.874115 | orchestrator | + protected = (known after apply)
2026-01-28 00:02:24.874119 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.874123 | orchestrator | + schema = (known after apply)
2026-01-28 00:02:24.874126 | orchestrator | + size_bytes = (known after apply)
2026-01-28 00:02:24.874130 | orchestrator | + tags = (known after apply)
2026-01-28 00:02:24.874134 | orchestrator | + updated_at = (known after apply)
2026-01-28 00:02:24.874138 | orchestrator | }
2026-01-28 00:02:24.874143 | orchestrator |
2026-01-28 00:02:24.874147 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-01-28 00:02:24.874151 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-01-28 00:02:24.874155 | orchestrator | + content = (known after apply)
2026-01-28 00:02:24.874159 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-28 00:02:24.874163 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-28 00:02:24.874167 | orchestrator | + content_md5 = (known after apply)
2026-01-28 00:02:24.874170 | orchestrator | + content_sha1 = (known after apply)
2026-01-28 00:02:24.874175 | orchestrator | + content_sha256 = (known after apply)
2026-01-28 00:02:24.874181 | orchestrator | + content_sha512 = (known after apply)
2026-01-28 00:02:24.874187 | orchestrator | + directory_permission = "0777"
2026-01-28 00:02:24.874194 | orchestrator | + file_permission = "0644"
2026-01-28 00:02:24.874204 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-01-28 00:02:24.874210 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.874216 | orchestrator | }
2026-01-28 00:02:24.874224 | orchestrator |
2026-01-28 00:02:24.874231 | orchestrator | # local_file.id_rsa_pub will be created
2026-01-28 00:02:24.874237 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-01-28 00:02:24.874243 | orchestrator | + content = (known after apply)
2026-01-28 00:02:24.874249 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-28 00:02:24.874256 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-28 00:02:24.874263 | orchestrator | + content_md5 = (known after apply)
2026-01-28 00:02:24.874270 | orchestrator | + content_sha1 = (known after apply)
2026-01-28 00:02:24.874276 | orchestrator | + content_sha256 = (known after apply)
2026-01-28 00:02:24.874290 | orchestrator | + content_sha512 = (known after apply)
2026-01-28 00:02:24.874297 | orchestrator | + directory_permission = "0777"
2026-01-28 00:02:24.874303 | orchestrator | + file_permission = "0644"
2026-01-28 00:02:24.874315 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-01-28 00:02:24.874322 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.874329 | orchestrator | }
2026-01-28 00:02:24.874338 | orchestrator |
2026-01-28 00:02:24.874344 | orchestrator | # local_file.inventory will be created
2026-01-28 00:02:24.874351 | orchestrator | + resource "local_file" "inventory" {
2026-01-28 00:02:24.874357 | orchestrator | + content = (known after apply)
2026-01-28 00:02:24.874364 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-28 00:02:24.874371 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-28 00:02:24.874377 | orchestrator | + content_md5 = (known after apply)
2026-01-28 00:02:24.874384 | orchestrator | + content_sha1 = (known after apply)
2026-01-28 00:02:24.874391 | orchestrator | + content_sha256 = (known after apply)
2026-01-28 00:02:24.874397 | orchestrator | + content_sha512 = (known after apply)
2026-01-28 00:02:24.874404 | orchestrator | + directory_permission = "0777"
2026-01-28 00:02:24.874410 | orchestrator | + file_permission = "0644"
2026-01-28 00:02:24.874417 | orchestrator | + filename = "inventory.ci"
2026-01-28 00:02:24.874424 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.874430 | orchestrator | }
2026-01-28 00:02:24.874439 | orchestrator |
2026-01-28 00:02:24.874446 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-01-28 00:02:24.874452 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-01-28 00:02:24.874486 | orchestrator | + content = (sensitive value)
2026-01-28 00:02:24.874493 | orchestrator | + content_base64sha256 = (known after apply)
2026-01-28 00:02:24.874500 | orchestrator | + content_base64sha512 = (known after apply)
2026-01-28 00:02:24.874507 | orchestrator | + content_md5 = (known after apply)
2026-01-28 00:02:24.874513 | orchestrator | + content_sha1 = (known after apply)
2026-01-28 00:02:24.874520 | orchestrator | + content_sha256 = (known after apply)
2026-01-28 00:02:24.874527 | orchestrator | + content_sha512 = (known after apply)
2026-01-28 00:02:24.874534 | orchestrator | + directory_permission = "0700"
2026-01-28 00:02:24.874540 | orchestrator | + file_permission = "0600"
2026-01-28 00:02:24.874547 | orchestrator | + filename = ".id_rsa.ci"
2026-01-28 00:02:24.874554 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.874561 | orchestrator | }
2026-01-28 00:02:24.874568 | orchestrator |
2026-01-28 00:02:24.874575 | orchestrator | # null_resource.node_semaphore will be created
2026-01-28 00:02:24.874581 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-01-28 00:02:24.874588 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.874595 | orchestrator | }
2026-01-28 00:02:24.874601 | orchestrator |
2026-01-28 00:02:24.874608 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-01-28 00:02:24.874615 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-01-28 00:02:24.874621 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.874627 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.874634 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.874641 | orchestrator | + image_id = (known after apply)
2026-01-28 00:02:24.874647 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.874654 | orchestrator | + name = "testbed-volume-manager-base"
2026-01-28 00:02:24.874661 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.874667 | orchestrator | + size = 80
2026-01-28 00:02:24.874674 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.874681 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.874688 | orchestrator | }
2026-01-28 00:02:24.874697 | orchestrator |
2026-01-28 00:02:24.874703 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-01-28 00:02:24.874709 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-28 00:02:24.874715 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.874722 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.874729 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.874740 | orchestrator | + image_id = (known after apply)
2026-01-28 00:02:24.874747 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.874754 | orchestrator | + name = "testbed-volume-0-node-base"
2026-01-28 00:02:24.874761 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.874767 | orchestrator | + size = 80
2026-01-28 00:02:24.874774 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.874781 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.874788 | orchestrator | }
2026-01-28 00:02:24.874794 | orchestrator |
2026-01-28 00:02:24.874802 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-01-28 00:02:24.874808 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-28 00:02:24.874815 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.874822 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.874828 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.874835 | orchestrator | + image_id = (known after apply)
2026-01-28 00:02:24.874842 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.874849 | orchestrator | + name = "testbed-volume-1-node-base"
2026-01-28 00:02:24.874856 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.874863 | orchestrator | + size = 80
2026-01-28 00:02:24.874869 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.874876 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.874883 | orchestrator | }
2026-01-28 00:02:24.874890 | orchestrator |
2026-01-28 00:02:24.874897 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-01-28 00:02:24.874903 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-28 00:02:24.874909 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.874916 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.874923 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.874930 | orchestrator | + image_id = (known after apply)
2026-01-28 00:02:24.874936 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.874943 | orchestrator | + name = "testbed-volume-2-node-base"
2026-01-28 00:02:24.874950 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.874957 | orchestrator | + size = 80
2026-01-28 00:02:24.874971 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.874978 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.874984 | orchestrator | }
2026-01-28 00:02:24.874993 | orchestrator |
2026-01-28 00:02:24.874999 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-01-28 00:02:24.875006 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-28 00:02:24.875012 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875019 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875026 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875033 | orchestrator | + image_id = (known after apply)
2026-01-28 00:02:24.875039 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875046 | orchestrator | + name = "testbed-volume-3-node-base"
2026-01-28 00:02:24.875052 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875058 | orchestrator | + size = 80
2026-01-28 00:02:24.875065 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875071 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875078 | orchestrator | }
2026-01-28 00:02:24.875085 | orchestrator |
2026-01-28 00:02:24.875091 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-01-28 00:02:24.875097 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-28 00:02:24.875103 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875109 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875116 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875127 | orchestrator | + image_id = (known after apply)
2026-01-28 00:02:24.875133 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875139 | orchestrator | + name = "testbed-volume-4-node-base"
2026-01-28 00:02:24.875145 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875150 | orchestrator | + size = 80
2026-01-28 00:02:24.875157 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875164 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875171 | orchestrator | }
2026-01-28 00:02:24.875177 | orchestrator |
2026-01-28 00:02:24.875184 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-01-28 00:02:24.875190 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-01-28 00:02:24.875196 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875202 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875208 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875215 | orchestrator | + image_id = (known after apply)
2026-01-28 00:02:24.875221 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875229 | orchestrator | + name = "testbed-volume-5-node-base"
2026-01-28 00:02:24.875235 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875241 | orchestrator | + size = 80
2026-01-28 00:02:24.875248 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875254 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875261 | orchestrator | }
2026-01-28 00:02:24.875267 | orchestrator |
2026-01-28 00:02:24.875274 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-01-28 00:02:24.875280 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-28 00:02:24.875286 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875292 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875298 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875304 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875310 | orchestrator | + name = "testbed-volume-0-node-3"
2026-01-28 00:02:24.875316 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875323 | orchestrator | + size = 20
2026-01-28 00:02:24.875329 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875336 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875343 | orchestrator | }
2026-01-28 00:02:24.875353 | orchestrator |
2026-01-28 00:02:24.875359 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-01-28 00:02:24.875365 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-28 00:02:24.875371 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875377 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875383 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875389 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875395 | orchestrator | + name = "testbed-volume-1-node-4"
2026-01-28 00:02:24.875401 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875406 | orchestrator | + size = 20
2026-01-28 00:02:24.875413 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875420 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875426 | orchestrator | }
2026-01-28 00:02:24.875434 | orchestrator |
2026-01-28 00:02:24.875440 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-01-28 00:02:24.875446 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-28 00:02:24.875452 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875475 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875482 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875489 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875495 | orchestrator | + name = "testbed-volume-2-node-5"
2026-01-28 00:02:24.875502 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875514 | orchestrator | + size = 20
2026-01-28 00:02:24.875520 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875527 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875534 | orchestrator | }
2026-01-28 00:02:24.875540 | orchestrator |
2026-01-28 00:02:24.875547 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-01-28 00:02:24.875554 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-28 00:02:24.875561 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875568 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875576 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875586 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875593 | orchestrator | + name = "testbed-volume-3-node-3"
2026-01-28 00:02:24.875600 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875607 | orchestrator | + size = 20
2026-01-28 00:02:24.875614 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875620 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875628 | orchestrator | }
2026-01-28 00:02:24.875634 | orchestrator |
2026-01-28 00:02:24.875640 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-01-28 00:02:24.875647 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-28 00:02:24.875653 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875660 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875667 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875674 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875681 | orchestrator | + name = "testbed-volume-4-node-4"
2026-01-28 00:02:24.875687 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875692 | orchestrator | + size = 20
2026-01-28 00:02:24.875698 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875704 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875710 | orchestrator | }
2026-01-28 00:02:24.875717 | orchestrator |
2026-01-28 00:02:24.875724 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-01-28 00:02:24.875730 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-28 00:02:24.875736 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875743 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875750 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875757 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875764 | orchestrator | + name = "testbed-volume-5-node-5"
2026-01-28 00:02:24.875770 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875777 | orchestrator | + size = 20
2026-01-28 00:02:24.875784 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875791 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875798 | orchestrator | }
2026-01-28 00:02:24.875804 | orchestrator |
2026-01-28 00:02:24.875810 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-01-28 00:02:24.875816 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-28 00:02:24.875823 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875830 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875838 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875845 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875851 | orchestrator | + name = "testbed-volume-6-node-3"
2026-01-28 00:02:24.875858 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875864 | orchestrator | + size = 20
2026-01-28 00:02:24.875870 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875877 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875884 | orchestrator | }
2026-01-28 00:02:24.875896 | orchestrator |
2026-01-28 00:02:24.875903 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-01-28 00:02:24.875909 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-01-28 00:02:24.875922 | orchestrator | + attachment = (known after apply)
2026-01-28 00:02:24.875929 | orchestrator | + availability_zone = "nova"
2026-01-28 00:02:24.875935 | orchestrator | + id = (known after apply)
2026-01-28 00:02:24.875942 | orchestrator | + metadata = (known after apply)
2026-01-28 00:02:24.875948 | orchestrator | + name = "testbed-volume-7-node-4"
2026-01-28 00:02:24.875955 | orchestrator | + region = (known after apply)
2026-01-28 00:02:24.875961 | orchestrator | + size = 20
2026-01-28 00:02:24.875968 | orchestrator | + volume_retype_policy = "never"
2026-01-28 00:02:24.875974 | orchestrator | + volume_type = "ssd"
2026-01-28 00:02:24.875981 | orchestrator | }
2026-01-28 00:02:24.875988 | orchestrator |
2026-01-28 00:02:24.875996 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-01-28 00:02:24.876003 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-01-28 00:02:24.876009 | orchestrator | + attachment = (known after apply) 2026-01-28 00:02:24.876016 | orchestrator | + availability_zone = "nova" 2026-01-28 00:02:24.876022 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.876029 | orchestrator | + metadata = (known after apply) 2026-01-28 00:02:24.876035 | orchestrator | + name = "testbed-volume-8-node-5" 2026-01-28 00:02:24.876042 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.876049 | orchestrator | + size = 20 2026-01-28 00:02:24.876055 | orchestrator | + volume_retype_policy = "never" 2026-01-28 00:02:24.876061 | orchestrator | + volume_type = "ssd" 2026-01-28 00:02:24.876068 | orchestrator | } 2026-01-28 00:02:24.876075 | orchestrator | 2026-01-28 00:02:24.876082 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-01-28 00:02:24.876089 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-01-28 00:02:24.876095 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-28 00:02:24.876103 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-28 00:02:24.876110 | orchestrator | + all_metadata = (known after apply) 2026-01-28 00:02:24.876116 | orchestrator | + all_tags = (known after apply) 2026-01-28 00:02:24.876123 | orchestrator | + availability_zone = "nova" 2026-01-28 00:02:24.876130 | orchestrator | + config_drive = true 2026-01-28 00:02:24.876139 | orchestrator | + created = (known after apply) 2026-01-28 00:02:24.876146 | orchestrator | + flavor_id = (known after apply) 2026-01-28 00:02:24.876153 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-01-28 00:02:24.876161 | orchestrator | + force_delete = false 2026-01-28 00:02:24.876168 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-28 00:02:24.876175 | 
orchestrator | + id = (known after apply) 2026-01-28 00:02:24.876181 | orchestrator | + image_id = (known after apply) 2026-01-28 00:02:24.876188 | orchestrator | + image_name = (known after apply) 2026-01-28 00:02:24.876196 | orchestrator | + key_pair = "testbed" 2026-01-28 00:02:24.876202 | orchestrator | + name = "testbed-manager" 2026-01-28 00:02:24.876209 | orchestrator | + power_state = "active" 2026-01-28 00:02:24.876215 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.876221 | orchestrator | + security_groups = (known after apply) 2026-01-28 00:02:24.876228 | orchestrator | + stop_before_destroy = false 2026-01-28 00:02:24.876234 | orchestrator | + updated = (known after apply) 2026-01-28 00:02:24.876241 | orchestrator | + user_data = (sensitive value) 2026-01-28 00:02:24.876247 | orchestrator | 2026-01-28 00:02:24.876254 | orchestrator | + block_device { 2026-01-28 00:02:24.876260 | orchestrator | + boot_index = 0 2026-01-28 00:02:24.876267 | orchestrator | + delete_on_termination = false 2026-01-28 00:02:24.876274 | orchestrator | + destination_type = "volume" 2026-01-28 00:02:24.876280 | orchestrator | + multiattach = false 2026-01-28 00:02:24.876286 | orchestrator | + source_type = "volume" 2026-01-28 00:02:24.876292 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.876304 | orchestrator | } 2026-01-28 00:02:24.876310 | orchestrator | 2026-01-28 00:02:24.876317 | orchestrator | + network { 2026-01-28 00:02:24.876323 | orchestrator | + access_network = false 2026-01-28 00:02:24.876329 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-28 00:02:24.876335 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-28 00:02:24.876341 | orchestrator | + mac = (known after apply) 2026-01-28 00:02:24.876348 | orchestrator | + name = (known after apply) 2026-01-28 00:02:24.876354 | orchestrator | + port = (known after apply) 2026-01-28 00:02:24.876360 | orchestrator | + uuid = (known after apply) 2026-01-28 
00:02:24.876366 | orchestrator | } 2026-01-28 00:02:24.876372 | orchestrator | } 2026-01-28 00:02:24.876383 | orchestrator | 2026-01-28 00:02:24.876389 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-01-28 00:02:24.876395 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-28 00:02:24.876401 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-28 00:02:24.876407 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-28 00:02:24.876413 | orchestrator | + all_metadata = (known after apply) 2026-01-28 00:02:24.876419 | orchestrator | + all_tags = (known after apply) 2026-01-28 00:02:24.876425 | orchestrator | + availability_zone = "nova" 2026-01-28 00:02:24.876431 | orchestrator | + config_drive = true 2026-01-28 00:02:24.876437 | orchestrator | + created = (known after apply) 2026-01-28 00:02:24.876443 | orchestrator | + flavor_id = (known after apply) 2026-01-28 00:02:24.876449 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-28 00:02:24.876478 | orchestrator | + force_delete = false 2026-01-28 00:02:24.876485 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-28 00:02:24.876491 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.876497 | orchestrator | + image_id = (known after apply) 2026-01-28 00:02:24.876503 | orchestrator | + image_name = (known after apply) 2026-01-28 00:02:24.876509 | orchestrator | + key_pair = "testbed" 2026-01-28 00:02:24.876515 | orchestrator | + name = "testbed-node-0" 2026-01-28 00:02:24.876519 | orchestrator | + power_state = "active" 2026-01-28 00:02:24.876522 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.876526 | orchestrator | + security_groups = (known after apply) 2026-01-28 00:02:24.876530 | orchestrator | + stop_before_destroy = false 2026-01-28 00:02:24.876534 | orchestrator | + updated = (known after apply) 2026-01-28 00:02:24.876538 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-28 00:02:24.876542 | orchestrator | 2026-01-28 00:02:24.876545 | orchestrator | + block_device { 2026-01-28 00:02:24.876549 | orchestrator | + boot_index = 0 2026-01-28 00:02:24.876553 | orchestrator | + delete_on_termination = false 2026-01-28 00:02:24.876557 | orchestrator | + destination_type = "volume" 2026-01-28 00:02:24.876560 | orchestrator | + multiattach = false 2026-01-28 00:02:24.876564 | orchestrator | + source_type = "volume" 2026-01-28 00:02:24.876568 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.876572 | orchestrator | } 2026-01-28 00:02:24.876576 | orchestrator | 2026-01-28 00:02:24.876579 | orchestrator | + network { 2026-01-28 00:02:24.876583 | orchestrator | + access_network = false 2026-01-28 00:02:24.876587 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-28 00:02:24.876590 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-28 00:02:24.876594 | orchestrator | + mac = (known after apply) 2026-01-28 00:02:24.876598 | orchestrator | + name = (known after apply) 2026-01-28 00:02:24.876602 | orchestrator | + port = (known after apply) 2026-01-28 00:02:24.876606 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.876609 | orchestrator | } 2026-01-28 00:02:24.876613 | orchestrator | } 2026-01-28 00:02:24.876617 | orchestrator | 2026-01-28 00:02:24.876621 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-01-28 00:02:24.876624 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-28 00:02:24.876628 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-28 00:02:24.876636 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-28 00:02:24.876640 | orchestrator | + all_metadata = (known after apply) 2026-01-28 00:02:24.876643 | orchestrator | + all_tags = (known after apply) 2026-01-28 00:02:24.876647 | orchestrator | + availability_zone = "nova" 2026-01-28 00:02:24.876651 
| orchestrator | + config_drive = true 2026-01-28 00:02:24.876654 | orchestrator | + created = (known after apply) 2026-01-28 00:02:24.876658 | orchestrator | + flavor_id = (known after apply) 2026-01-28 00:02:24.876662 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-28 00:02:24.876666 | orchestrator | + force_delete = false 2026-01-28 00:02:24.876669 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-28 00:02:24.876673 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.876677 | orchestrator | + image_id = (known after apply) 2026-01-28 00:02:24.876681 | orchestrator | + image_name = (known after apply) 2026-01-28 00:02:24.876684 | orchestrator | + key_pair = "testbed" 2026-01-28 00:02:24.876688 | orchestrator | + name = "testbed-node-1" 2026-01-28 00:02:24.876692 | orchestrator | + power_state = "active" 2026-01-28 00:02:24.876696 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.876699 | orchestrator | + security_groups = (known after apply) 2026-01-28 00:02:24.876703 | orchestrator | + stop_before_destroy = false 2026-01-28 00:02:24.876707 | orchestrator | + updated = (known after apply) 2026-01-28 00:02:24.876714 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-28 00:02:24.876718 | orchestrator | 2026-01-28 00:02:24.876722 | orchestrator | + block_device { 2026-01-28 00:02:24.876725 | orchestrator | + boot_index = 0 2026-01-28 00:02:24.876729 | orchestrator | + delete_on_termination = false 2026-01-28 00:02:24.876733 | orchestrator | + destination_type = "volume" 2026-01-28 00:02:24.876737 | orchestrator | + multiattach = false 2026-01-28 00:02:24.876740 | orchestrator | + source_type = "volume" 2026-01-28 00:02:24.876744 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.876748 | orchestrator | } 2026-01-28 00:02:24.876752 | orchestrator | 2026-01-28 00:02:24.876755 | orchestrator | + network { 2026-01-28 00:02:24.876759 | orchestrator | + access_network = 
false 2026-01-28 00:02:24.876763 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-28 00:02:24.876767 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-28 00:02:24.876770 | orchestrator | + mac = (known after apply) 2026-01-28 00:02:24.876774 | orchestrator | + name = (known after apply) 2026-01-28 00:02:24.876778 | orchestrator | + port = (known after apply) 2026-01-28 00:02:24.876782 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.876786 | orchestrator | } 2026-01-28 00:02:24.876789 | orchestrator | } 2026-01-28 00:02:24.876796 | orchestrator | 2026-01-28 00:02:24.876800 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-01-28 00:02:24.876803 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-28 00:02:24.876807 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-28 00:02:24.876811 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-28 00:02:24.876816 | orchestrator | + all_metadata = (known after apply) 2026-01-28 00:02:24.876820 | orchestrator | + all_tags = (known after apply) 2026-01-28 00:02:24.876823 | orchestrator | + availability_zone = "nova" 2026-01-28 00:02:24.876827 | orchestrator | + config_drive = true 2026-01-28 00:02:24.876831 | orchestrator | + created = (known after apply) 2026-01-28 00:02:24.876835 | orchestrator | + flavor_id = (known after apply) 2026-01-28 00:02:24.876839 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-28 00:02:24.876842 | orchestrator | + force_delete = false 2026-01-28 00:02:24.876846 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-28 00:02:24.876850 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.876853 | orchestrator | + image_id = (known after apply) 2026-01-28 00:02:24.876860 | orchestrator | + image_name = (known after apply) 2026-01-28 00:02:24.876864 | orchestrator | + key_pair = "testbed" 2026-01-28 00:02:24.876868 | orchestrator | + name = 
"testbed-node-2" 2026-01-28 00:02:24.876872 | orchestrator | + power_state = "active" 2026-01-28 00:02:24.876875 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.876879 | orchestrator | + security_groups = (known after apply) 2026-01-28 00:02:24.876883 | orchestrator | + stop_before_destroy = false 2026-01-28 00:02:24.876887 | orchestrator | + updated = (known after apply) 2026-01-28 00:02:24.876890 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-28 00:02:24.876894 | orchestrator | 2026-01-28 00:02:24.876898 | orchestrator | + block_device { 2026-01-28 00:02:24.876902 | orchestrator | + boot_index = 0 2026-01-28 00:02:24.876905 | orchestrator | + delete_on_termination = false 2026-01-28 00:02:24.876909 | orchestrator | + destination_type = "volume" 2026-01-28 00:02:24.876913 | orchestrator | + multiattach = false 2026-01-28 00:02:24.876917 | orchestrator | + source_type = "volume" 2026-01-28 00:02:24.876920 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.876924 | orchestrator | } 2026-01-28 00:02:24.876928 | orchestrator | 2026-01-28 00:02:24.876932 | orchestrator | + network { 2026-01-28 00:02:24.876935 | orchestrator | + access_network = false 2026-01-28 00:02:24.876939 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-28 00:02:24.876943 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-28 00:02:24.876947 | orchestrator | + mac = (known after apply) 2026-01-28 00:02:24.876950 | orchestrator | + name = (known after apply) 2026-01-28 00:02:24.876954 | orchestrator | + port = (known after apply) 2026-01-28 00:02:24.876958 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.876962 | orchestrator | } 2026-01-28 00:02:24.876965 | orchestrator | } 2026-01-28 00:02:24.876969 | orchestrator | 2026-01-28 00:02:24.876976 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-01-28 00:02:24.876980 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-01-28 00:02:24.876983 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-28 00:02:24.876987 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-28 00:02:24.876991 | orchestrator | + all_metadata = (known after apply) 2026-01-28 00:02:24.876995 | orchestrator | + all_tags = (known after apply) 2026-01-28 00:02:24.876999 | orchestrator | + availability_zone = "nova" 2026-01-28 00:02:24.877002 | orchestrator | + config_drive = true 2026-01-28 00:02:24.877006 | orchestrator | + created = (known after apply) 2026-01-28 00:02:24.877010 | orchestrator | + flavor_id = (known after apply) 2026-01-28 00:02:24.877014 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-28 00:02:24.877017 | orchestrator | + force_delete = false 2026-01-28 00:02:24.877021 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-28 00:02:24.877025 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.877028 | orchestrator | + image_id = (known after apply) 2026-01-28 00:02:24.877032 | orchestrator | + image_name = (known after apply) 2026-01-28 00:02:24.877036 | orchestrator | + key_pair = "testbed" 2026-01-28 00:02:24.877040 | orchestrator | + name = "testbed-node-3" 2026-01-28 00:02:24.877043 | orchestrator | + power_state = "active" 2026-01-28 00:02:24.877047 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.877051 | orchestrator | + security_groups = (known after apply) 2026-01-28 00:02:24.877055 | orchestrator | + stop_before_destroy = false 2026-01-28 00:02:24.877058 | orchestrator | + updated = (known after apply) 2026-01-28 00:02:24.877062 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-28 00:02:24.877066 | orchestrator | 2026-01-28 00:02:24.877070 | orchestrator | + block_device { 2026-01-28 00:02:24.877073 | orchestrator | + boot_index = 0 2026-01-28 00:02:24.877077 | orchestrator | + delete_on_termination = false 2026-01-28 
00:02:24.877081 | orchestrator | + destination_type = "volume" 2026-01-28 00:02:24.877088 | orchestrator | + multiattach = false 2026-01-28 00:02:24.877092 | orchestrator | + source_type = "volume" 2026-01-28 00:02:24.877095 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.877099 | orchestrator | } 2026-01-28 00:02:24.877103 | orchestrator | 2026-01-28 00:02:24.877107 | orchestrator | + network { 2026-01-28 00:02:24.877111 | orchestrator | + access_network = false 2026-01-28 00:02:24.877114 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-28 00:02:24.877118 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-28 00:02:24.877122 | orchestrator | + mac = (known after apply) 2026-01-28 00:02:24.877125 | orchestrator | + name = (known after apply) 2026-01-28 00:02:24.877129 | orchestrator | + port = (known after apply) 2026-01-28 00:02:24.877133 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.877137 | orchestrator | } 2026-01-28 00:02:24.877141 | orchestrator | } 2026-01-28 00:02:24.877146 | orchestrator | 2026-01-28 00:02:24.877150 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-01-28 00:02:24.877154 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-28 00:02:24.877158 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-28 00:02:24.877161 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-28 00:02:24.877165 | orchestrator | + all_metadata = (known after apply) 2026-01-28 00:02:24.877169 | orchestrator | + all_tags = (known after apply) 2026-01-28 00:02:24.877173 | orchestrator | + availability_zone = "nova" 2026-01-28 00:02:24.877176 | orchestrator | + config_drive = true 2026-01-28 00:02:24.877180 | orchestrator | + created = (known after apply) 2026-01-28 00:02:24.877184 | orchestrator | + flavor_id = (known after apply) 2026-01-28 00:02:24.877188 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-28 00:02:24.877192 | 
orchestrator | + force_delete = false 2026-01-28 00:02:24.877195 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-28 00:02:24.877199 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.877203 | orchestrator | + image_id = (known after apply) 2026-01-28 00:02:24.877206 | orchestrator | + image_name = (known after apply) 2026-01-28 00:02:24.877210 | orchestrator | + key_pair = "testbed" 2026-01-28 00:02:24.877214 | orchestrator | + name = "testbed-node-4" 2026-01-28 00:02:24.877218 | orchestrator | + power_state = "active" 2026-01-28 00:02:24.877222 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.877225 | orchestrator | + security_groups = (known after apply) 2026-01-28 00:02:24.877229 | orchestrator | + stop_before_destroy = false 2026-01-28 00:02:24.877233 | orchestrator | + updated = (known after apply) 2026-01-28 00:02:24.877237 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-28 00:02:24.877240 | orchestrator | 2026-01-28 00:02:24.877244 | orchestrator | + block_device { 2026-01-28 00:02:24.877248 | orchestrator | + boot_index = 0 2026-01-28 00:02:24.877252 | orchestrator | + delete_on_termination = false 2026-01-28 00:02:24.877255 | orchestrator | + destination_type = "volume" 2026-01-28 00:02:24.877259 | orchestrator | + multiattach = false 2026-01-28 00:02:24.877263 | orchestrator | + source_type = "volume" 2026-01-28 00:02:24.877267 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.877270 | orchestrator | } 2026-01-28 00:02:24.877274 | orchestrator | 2026-01-28 00:02:24.877278 | orchestrator | + network { 2026-01-28 00:02:24.877282 | orchestrator | + access_network = false 2026-01-28 00:02:24.877286 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-28 00:02:24.877289 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-28 00:02:24.877293 | orchestrator | + mac = (known after apply) 2026-01-28 00:02:24.877297 | orchestrator | + name = (known 
after apply) 2026-01-28 00:02:24.877301 | orchestrator | + port = (known after apply) 2026-01-28 00:02:24.877304 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.877308 | orchestrator | } 2026-01-28 00:02:24.877312 | orchestrator | } 2026-01-28 00:02:24.877320 | orchestrator | 2026-01-28 00:02:24.877324 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-01-28 00:02:24.877328 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-01-28 00:02:24.877332 | orchestrator | + access_ip_v4 = (known after apply) 2026-01-28 00:02:24.877336 | orchestrator | + access_ip_v6 = (known after apply) 2026-01-28 00:02:24.877340 | orchestrator | + all_metadata = (known after apply) 2026-01-28 00:02:24.877343 | orchestrator | + all_tags = (known after apply) 2026-01-28 00:02:24.877347 | orchestrator | + availability_zone = "nova" 2026-01-28 00:02:24.877351 | orchestrator | + config_drive = true 2026-01-28 00:02:24.877355 | orchestrator | + created = (known after apply) 2026-01-28 00:02:24.877358 | orchestrator | + flavor_id = (known after apply) 2026-01-28 00:02:24.877362 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-01-28 00:02:24.877366 | orchestrator | + force_delete = false 2026-01-28 00:02:24.877370 | orchestrator | + hypervisor_hostname = (known after apply) 2026-01-28 00:02:24.877373 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.877377 | orchestrator | + image_id = (known after apply) 2026-01-28 00:02:24.877381 | orchestrator | + image_name = (known after apply) 2026-01-28 00:02:24.877385 | orchestrator | + key_pair = "testbed" 2026-01-28 00:02:24.877388 | orchestrator | + name = "testbed-node-5" 2026-01-28 00:02:24.877392 | orchestrator | + power_state = "active" 2026-01-28 00:02:24.877396 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.877400 | orchestrator | + security_groups = (known after apply) 2026-01-28 00:02:24.877403 | orchestrator | + 
stop_before_destroy = false 2026-01-28 00:02:24.877407 | orchestrator | + updated = (known after apply) 2026-01-28 00:02:24.877411 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-01-28 00:02:24.877415 | orchestrator | 2026-01-28 00:02:24.877418 | orchestrator | + block_device { 2026-01-28 00:02:24.877422 | orchestrator | + boot_index = 0 2026-01-28 00:02:24.877426 | orchestrator | + delete_on_termination = false 2026-01-28 00:02:24.877430 | orchestrator | + destination_type = "volume" 2026-01-28 00:02:24.877433 | orchestrator | + multiattach = false 2026-01-28 00:02:24.877437 | orchestrator | + source_type = "volume" 2026-01-28 00:02:24.877441 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.877445 | orchestrator | } 2026-01-28 00:02:24.877449 | orchestrator | 2026-01-28 00:02:24.877452 | orchestrator | + network { 2026-01-28 00:02:24.877544 | orchestrator | + access_network = false 2026-01-28 00:02:24.877550 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-01-28 00:02:24.877554 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-01-28 00:02:24.877558 | orchestrator | + mac = (known after apply) 2026-01-28 00:02:24.877562 | orchestrator | + name = (known after apply) 2026-01-28 00:02:24.877566 | orchestrator | + port = (known after apply) 2026-01-28 00:02:24.877570 | orchestrator | + uuid = (known after apply) 2026-01-28 00:02:24.877573 | orchestrator | } 2026-01-28 00:02:24.877577 | orchestrator | } 2026-01-28 00:02:24.877581 | orchestrator | 2026-01-28 00:02:24.877585 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-01-28 00:02:24.877589 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-01-28 00:02:24.877593 | orchestrator | + fingerprint = (known after apply) 2026-01-28 00:02:24.877596 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.877600 | orchestrator | + name = "testbed" 2026-01-28 00:02:24.877604 | orchestrator | + private_key = 
(sensitive value) 2026-01-28 00:02:24.877608 | orchestrator | + public_key = (known after apply) 2026-01-28 00:02:24.877611 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.877615 | orchestrator | + user_id = (known after apply) 2026-01-28 00:02:24.877619 | orchestrator | } 2026-01-28 00:02:24.877623 | orchestrator | 2026-01-28 00:02:24.877627 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-01-28 00:02:24.877630 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-28 00:02:24.877639 | orchestrator | + device = (known after apply) 2026-01-28 00:02:24.877643 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.877647 | orchestrator | + instance_id = (known after apply) 2026-01-28 00:02:24.877650 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.877657 | orchestrator | + volume_id = (known after apply) 2026-01-28 00:02:24.877661 | orchestrator | } 2026-01-28 00:02:24.877665 | orchestrator | 2026-01-28 00:02:24.877669 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-01-28 00:02:24.877673 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-01-28 00:02:24.877677 | orchestrator | + device = (known after apply) 2026-01-28 00:02:24.877680 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.877731 | orchestrator | + instance_id = (known after apply) 2026-01-28 00:02:24.877735 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.877739 | orchestrator | + volume_id = (known after apply) 2026-01-28 00:02:24.877743 | orchestrator | } 2026-01-28 00:02:24.877751 | orchestrator | 2026-01-28 00:02:24.877755 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-01-28 00:02:24.877759 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-01-28 00:02:24.877763 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  # (each with the identical body:)
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  # openstack_networking_port_v2.node_port_management[1] will be created
  # openstack_networking_port_v2.node_port_management[2] will be created
  # openstack_networking_port_v2.node_port_management[3] will be created
  # openstack_networking_port_v2.node_port_management[4] will be created
  # openstack_networking_port_v2.node_port_management[5] will be created
  # (ports [0]-[5] share one body; only fixed_ip.ip_address varies,
  #  running 192.168.16.10 through 192.168.16.15:)
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # security_group_management_rule3 (protocol "tcp") and rule4 (protocol "udp")
  # will be created: ingress, IPv4, remote_ip_prefix "192.168.16.0/20",
  # no port range, all other attributes (known after apply).
  # security_group_management_rule5 (protocol "icmp"), security_group_node_rule1
  # (protocol "tcp"), security_group_node_rule2 (protocol "udp") and
  # security_group_node_rule3 (protocol "icmp") will be created: ingress, IPv4,
  # remote_ip_prefix "0.0.0.0/0", no port range, all other attributes
  # (known after apply).

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-01-28 00:02:24.884671 | orchestrator | + network_id = (known after apply) 2026-01-28 00:02:24.884675 | orchestrator | + no_gateway = false 2026-01-28 00:02:24.884678 | orchestrator | + region = (known after apply) 2026-01-28 00:02:24.884682 | orchestrator | + service_types = (known after apply) 2026-01-28 00:02:24.884689 | orchestrator | + tenant_id = (known after apply) 2026-01-28 00:02:24.884693 | orchestrator | 2026-01-28 00:02:24.884697 | orchestrator | + allocation_pool { 2026-01-28 00:02:24.884701 | orchestrator | + end = "192.168.31.250" 2026-01-28 00:02:24.884705 | orchestrator | + start = "192.168.31.200" 2026-01-28 00:02:24.884709 | orchestrator | } 2026-01-28 00:02:24.884712 | orchestrator | } 2026-01-28 00:02:24.884716 | orchestrator | 2026-01-28 00:02:24.884720 | orchestrator | # terraform_data.image will be created 2026-01-28 00:02:24.884724 | orchestrator | + resource "terraform_data" "image" { 2026-01-28 00:02:24.884728 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.884731 | orchestrator | + input = "Ubuntu 24.04" 2026-01-28 00:02:24.884735 | orchestrator | + output = (known after apply) 2026-01-28 00:02:24.884739 | orchestrator | } 2026-01-28 00:02:24.884743 | orchestrator | 2026-01-28 00:02:24.884747 | orchestrator | # terraform_data.image_node will be created 2026-01-28 00:02:24.884750 | orchestrator | + resource "terraform_data" "image_node" { 2026-01-28 00:02:24.884754 | orchestrator | + id = (known after apply) 2026-01-28 00:02:24.884758 | orchestrator | + input = "Ubuntu 24.04" 2026-01-28 00:02:24.884762 | orchestrator | + output = (known after apply) 2026-01-28 00:02:24.884765 | orchestrator | } 2026-01-28 00:02:24.884769 | orchestrator | 2026-01-28 00:02:24.884773 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-01-28 00:02:24.884777 | orchestrator | 2026-01-28 00:02:24.884781 | orchestrator | Changes to Outputs: 2026-01-28 00:02:24.884785 | orchestrator | + manager_address = (sensitive value) 2026-01-28 00:02:24.884788 | orchestrator | + private_key = (sensitive value) 2026-01-28 00:02:25.407816 | orchestrator | terraform_data.image_node: Creating... 2026-01-28 00:02:25.408548 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=5645039c-5886-1a01-6a8e-3e8c95d63925] 2026-01-28 00:02:25.409491 | orchestrator | terraform_data.image: Creating... 2026-01-28 00:02:25.414083 | orchestrator | terraform_data.image: Creation complete after 0s [id=c75ad030-f6b9-bbba-a973-2f368156e5c8] 2026-01-28 00:02:25.426053 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-01-28 00:02:25.432526 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-01-28 00:02:25.432573 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-01-28 00:02:25.442135 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-01-28 00:02:25.442204 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-01-28 00:02:25.447412 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-01-28 00:02:25.449345 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-01-28 00:02:25.456995 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-01-28 00:02:25.457035 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-01-28 00:02:25.459359 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-01-28 00:02:25.915153 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-28 00:02:25.919741 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 
2026-01-28 00:02:25.956340 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-01-28 00:02:25.958253 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-01-28 00:02:26.015878 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-01-28 00:02:26.022392 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-01-28 00:02:26.815610 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=bc263260-0fd5-4c8e-9cbd-ffd6f020b3af] 2026-01-28 00:02:26.827689 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-01-28 00:02:29.109009 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=b987a0a7-5a55-41a6-ab39-84821076e11d] 2026-01-28 00:02:29.130193 | orchestrator | local_file.id_rsa_pub: Creating... 2026-01-28 00:02:29.145896 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=16c577848d9cb60bbee35cb15a7a188fcaef444f] 2026-01-28 00:02:29.148685 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=28bc48d4-f1f3-45fc-825f-eba8771d5ae9] 2026-01-28 00:02:29.152993 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-01-28 00:02:29.154132 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=ac1a11ac-4fa1-43c2-9cb0-18dea5100f59] 2026-01-28 00:02:29.157348 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-01-28 00:02:29.160089 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=73ccfbcc-79e9-4762-9da0-bda867b64772] 2026-01-28 00:02:29.160127 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-01-28 00:02:29.168760 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=c22ca58b93e0d137205304dabfb351f22dbd562f] 2026-01-28 00:02:29.171221 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-01-28 00:02:29.171257 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=f01b7d80-edbf-4a4b-9318-1b6b20cc249d] 2026-01-28 00:02:29.183371 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-01-28 00:02:29.183543 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=3b42e8f1-b37c-4f60-8295-3641607d148d] 2026-01-28 00:02:29.189315 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-01-28 00:02:29.191773 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-01-28 00:02:29.211383 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=eefe2a4b-8f8c-4873-9530-ac9327ae5f1f] 2026-01-28 00:02:29.217552 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=aa542874-8b0a-406e-9706-56af76962c37] 2026-01-28 00:02:29.219743 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2026-01-28 00:02:29.230419 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250] 2026-01-28 00:02:30.248106 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=dfb1ada9-d218-4c15-a09b-eaf5e1563f2d] 2026-01-28 00:02:30.995387 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=ca65200e-8eff-41e7-bb30-03bdbe16300a] 2026-01-28 00:02:31.002389 | orchestrator | openstack_networking_router_v2.router: Creating... 
2026-01-28 00:02:32.567502 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=803306ce-e622-41b7-ac52-96a9edfbbdc2] 2026-01-28 00:02:32.598775 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=6a73c03e-ec93-4e83-874f-d58572852c6e] 2026-01-28 00:02:32.627172 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=c6f74680-d470-418b-9174-209ebb6c671b] 2026-01-28 00:02:32.654472 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=0612b1ae-ec08-4713-9db6-5b0c740ef835] 2026-01-28 00:02:32.669251 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=d9f799c7-1a34-4b7c-88c1-e9cf002fdca2] 2026-01-28 00:02:32.671219 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=d8da0d7b-f707-4e6a-9b76-8e65b0275701] 2026-01-28 00:02:34.278276 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=47fc520e-0a2e-42c2-ac80-3286df6cb1e3] 2026-01-28 00:02:34.285054 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-01-28 00:02:34.285114 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-01-28 00:02:34.289508 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-01-28 00:02:34.503006 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=b7ddbec0-879b-4088-9153-4cc1ec8f644f] 2026-01-28 00:02:34.518771 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-01-28 00:02:34.521344 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 
2026-01-28 00:02:34.521528 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-01-28 00:02:34.524365 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-01-28 00:02:34.524954 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-01-28 00:02:34.526278 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-01-28 00:02:34.531774 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-01-28 00:02:34.536988 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-01-28 00:02:34.969332 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=accde7d5-0176-4382-94c6-1d35b58bdde1] 2026-01-28 00:02:34.986212 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-01-28 00:02:35.069033 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=9e42ba83-05ea-4c64-bb94-6d6ac3f8000d] 2026-01-28 00:02:35.083894 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-01-28 00:02:35.503514 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=a13f6cc3-eb55-43e7-aaf3-67fd67eeee3b] 2026-01-28 00:02:35.510562 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-01-28 00:02:35.571799 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=69fc9f49-ff67-4d42-8244-b33fac5dd4b1] 2026-01-28 00:02:35.578005 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2026-01-28 00:02:35.749774 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=3fab9a24-7426-462b-922c-caf37cb8fe92] 2026-01-28 00:02:35.755964 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-01-28 00:02:35.758385 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=842c58fb-bf6f-43e6-ad3b-8e85d68e0ac6] 2026-01-28 00:02:35.764734 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-01-28 00:02:35.848196 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=010d2593-5643-4592-b48e-ac35a9a2856a] 2026-01-28 00:02:35.855268 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-01-28 00:02:36.067291 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=728059df-3447-49fe-bae8-6140415cdb7e] 2026-01-28 00:02:36.082758 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 
2026-01-28 00:02:36.414123 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=ce82899a-be6c-4e73-8cfd-05479f8ebf2b] 2026-01-28 00:02:36.558776 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=044ed59a-97d9-41df-a251-22225adbfbde] 2026-01-28 00:02:36.908672 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=af801ed9-344b-4e26-bc1b-b501f8a8ea2f] 2026-01-28 00:02:37.144890 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=a30f1010-741a-4c9f-b820-01da782f7c16] 2026-01-28 00:02:37.444988 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 2s [id=dbd34f11-1e7e-4a19-a004-491c4847f751] 2026-01-28 00:02:37.596857 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=76a4af40-9518-44c0-b8b8-dd052ad00359] 2026-01-28 00:02:37.671647 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 3s [id=99ffdf98-dac8-4509-8b8e-8483b0930af8] 2026-01-28 00:02:38.051100 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=69c893a7-4180-492a-b071-39f59fbf78ca] 2026-01-28 00:02:38.811017 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=1a60ca1b-0585-4879-8828-d42e803b13a5] 2026-01-28 00:02:38.831529 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-01-28 00:02:38.837911 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-01-28 00:02:38.849644 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 
2026-01-28 00:02:38.849950 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-01-28 00:02:38.851800 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-01-28 00:02:38.868050 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-01-28 00:02:38.868727 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-01-28 00:02:39.062358 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 3s [id=bccb4d58-1af5-461e-a93e-3ff494f6e57e] 2026-01-28 00:02:41.001825 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=74cbe669-f0e8-4a14-9f10-761f40d066ad] 2026-01-28 00:02:41.012335 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-01-28 00:02:41.015116 | orchestrator | local_file.inventory: Creating... 2026-01-28 00:02:41.020857 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-01-28 00:02:41.023884 | orchestrator | local_file.inventory: Creation complete after 0s [id=a6cc5fa739c63fa084c74d51c0d00a7fb3eef83e] 2026-01-28 00:02:41.026718 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=8b10bab329e7687faa6afac2a0fd4ced05001755] 2026-01-28 00:02:42.585080 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=74cbe669-f0e8-4a14-9f10-761f40d066ad] 2026-01-28 00:02:48.842528 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-01-28 00:02:48.854952 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-01-28 00:02:48.855032 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[10s elapsed] 2026-01-28 00:02:48.855047 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-01-28 00:02:48.871159 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-01-28 00:02:48.871237 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-01-28 00:02:58.849861 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-01-28 00:02:58.855463 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-01-28 00:02:58.855566 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-01-28 00:02:58.855709 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-01-28 00:02:58.872168 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-01-28 00:02:58.872244 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-01-28 00:03:08.850268 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-01-28 00:03:08.856721 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-01-28 00:03:08.856810 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-01-28 00:03:08.856826 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-01-28 00:03:08.873099 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-01-28 00:03:08.873196 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-01-28 00:03:18.859357 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[40s elapsed] 2026-01-28 00:03:18.859498 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-01-28 00:03:18.859573 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-01-28 00:03:18.859594 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-01-28 00:03:18.873970 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2026-01-28 00:03:18.874129 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-01-28 00:03:19.694088 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=fdc0f9b9-1c59-4d0f-93e1-2bb1bfc05183] 2026-01-28 00:03:20.031843 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=d1db63bb-9db1-491b-a306-b96650090d47] 2026-01-28 00:03:28.867630 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed] 2026-01-28 00:03:28.867782 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed] 2026-01-28 00:03:28.867812 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed] 2026-01-28 00:03:28.875053 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed] 2026-01-28 00:03:30.406496 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=c3d31ed7-7f3f-424d-9753-2bed5707eb23] 2026-01-28 00:03:30.629122 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 52s [id=662e1a4c-a9a9-4c27-bba0-958a6546b8d2] 2026-01-28 00:03:38.874275 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed] 2026-01-28 00:03:38.874483 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[1m0s elapsed] 2026-01-28 00:03:39.871445 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m1s [id=ab69f658-ba70-43a3-813a-675890ce81e2] 2026-01-28 00:03:40.634674 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 1m2s [id=df698fdf-6718-423a-9204-0e9c6ef90c8d] 2026-01-28 00:03:40.650107 | orchestrator | null_resource.node_semaphore: Creating... 2026-01-28 00:03:40.658980 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=3546525931870183922] 2026-01-28 00:03:40.663475 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-01-28 00:03:40.663626 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-01-28 00:03:40.670897 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-01-28 00:03:40.671026 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-01-28 00:03:40.675345 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-01-28 00:03:40.681156 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-01-28 00:03:40.681804 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-01-28 00:03:40.697770 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-01-28 00:03:40.701037 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-01-28 00:03:40.702804 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 
2026-01-28 00:03:44.124805 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=d1db63bb-9db1-491b-a306-b96650090d47/ac1a11ac-4fa1-43c2-9cb0-18dea5100f59] 2026-01-28 00:03:44.136935 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=fdc0f9b9-1c59-4d0f-93e1-2bb1bfc05183/28bc48d4-f1f3-45fc-825f-eba8771d5ae9] 2026-01-28 00:03:44.149914 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=ab69f658-ba70-43a3-813a-675890ce81e2/f01b7d80-edbf-4a4b-9318-1b6b20cc249d] 2026-01-28 00:03:44.176450 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=d1db63bb-9db1-491b-a306-b96650090d47/b987a0a7-5a55-41a6-ab39-84821076e11d] 2026-01-28 00:03:44.189150 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=fdc0f9b9-1c59-4d0f-93e1-2bb1bfc05183/aa542874-8b0a-406e-9706-56af76962c37] 2026-01-28 00:03:44.213336 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=ab69f658-ba70-43a3-813a-675890ce81e2/eefe2a4b-8f8c-4873-9530-ac9327ae5f1f] 2026-01-28 00:03:50.275816 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=fdc0f9b9-1c59-4d0f-93e1-2bb1bfc05183/3b42e8f1-b37c-4f60-8295-3641607d148d] 2026-01-28 00:03:50.283004 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=d1db63bb-9db1-491b-a306-b96650090d47/1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250] 2026-01-28 00:03:50.316148 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=ab69f658-ba70-43a3-813a-675890ce81e2/73ccfbcc-79e9-4762-9da0-bda867b64772] 2026-01-28 00:03:50.704012 | orchestrator | openstack_compute_instance_v2.manager_server: 
Still creating... [10s elapsed] 2026-01-28 00:04:00.704756 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-01-28 00:04:01.119832 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=f743a329-dcdc-448f-b86d-e97812ae22d2] 2026-01-28 00:04:01.274829 | orchestrator | 2026-01-28 00:04:01.274907 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-01-28 00:04:01.274921 | orchestrator | 2026-01-28 00:04:01.274933 | orchestrator | Outputs: 2026-01-28 00:04:01.274943 | orchestrator | 2026-01-28 00:04:01.274954 | orchestrator | manager_address = 2026-01-28 00:04:01.274964 | orchestrator | private_key = 2026-01-28 00:04:01.768512 | orchestrator | ok: Runtime: 0:01:42.825952 2026-01-28 00:04:01.805194 | 2026-01-28 00:04:01.805349 | TASK [Fetch manager address] 2026-01-28 00:04:02.253874 | orchestrator | ok 2026-01-28 00:04:02.263280 | 2026-01-28 00:04:02.263410 | TASK [Set manager_host address] 2026-01-28 00:04:02.344033 | orchestrator | ok 2026-01-28 00:04:02.355792 | 2026-01-28 00:04:02.355939 | LOOP [Update ansible collections] 2026-01-28 00:04:11.127087 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-28 00:04:11.127435 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-28 00:04:11.127490 | orchestrator | Starting galaxy collection install process 2026-01-28 00:04:11.127527 | orchestrator | Process install dependency map 2026-01-28 00:04:11.127609 | orchestrator | Starting collection install process 2026-01-28 00:04:11.127644 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2026-01-28 00:04:11.127679 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2026-01-28 00:04:11.127720 | 
orchestrator | osism.commons:999.0.0 was installed successfully 2026-01-28 00:04:11.127798 | orchestrator | ok: Item: commons Runtime: 0:00:08.458480 2026-01-28 00:04:13.073916 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-01-28 00:04:13.074191 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-28 00:04:13.074262 | orchestrator | Starting galaxy collection install process 2026-01-28 00:04:13.074305 | orchestrator | Process install dependency map 2026-01-28 00:04:13.074343 | orchestrator | Starting collection install process 2026-01-28 00:04:13.074437 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2026-01-28 00:04:13.074479 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2026-01-28 00:04:13.074515 | orchestrator | osism.services:999.0.0 was installed successfully 2026-01-28 00:04:13.074629 | orchestrator | ok: Item: services Runtime: 0:00:01.665896 2026-01-28 00:04:13.094299 | 2026-01-28 00:04:13.094472 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-28 00:04:23.721571 | orchestrator | ok 2026-01-28 00:04:23.729291 | 2026-01-28 00:04:23.729396 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-28 00:05:23.778665 | orchestrator | ok 2026-01-28 00:05:23.787327 | 2026-01-28 00:05:23.787439 | TASK [Fetch manager ssh hostkey] 2026-01-28 00:05:25.357896 | orchestrator | Output suppressed because no_log was given 2026-01-28 00:05:25.379807 | 2026-01-28 00:05:25.380038 | TASK [Get ssh keypair from terraform environment] 2026-01-28 00:05:25.940671 | orchestrator | ok: Runtime: 0:00:00.009470 2026-01-28 00:05:25.956817 | 2026-01-28 00:05:25.956993 | TASK [Point out that the following task takes some time and does not give 
any output] 2026-01-28 00:05:25.992760 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-01-28 00:05:26.004045 | 2026-01-28 00:05:26.004205 | TASK [Run manager part 0] 2026-01-28 00:05:27.098060 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-28 00:05:27.147668 | orchestrator | 2026-01-28 00:05:27.147751 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-01-28 00:05:27.147763 | orchestrator | 2026-01-28 00:05:27.147783 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-01-28 00:05:29.088148 | orchestrator | ok: [testbed-manager] 2026-01-28 00:05:29.088235 | orchestrator | 2026-01-28 00:05:29.088276 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-28 00:05:29.088295 | orchestrator | 2026-01-28 00:05:29.088312 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-28 00:05:31.034575 | orchestrator | ok: [testbed-manager] 2026-01-28 00:05:31.034631 | orchestrator | 2026-01-28 00:05:31.034644 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-28 00:05:31.735591 | orchestrator | ok: [testbed-manager] 2026-01-28 00:05:31.735634 | orchestrator | 2026-01-28 00:05:31.735640 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-01-28 00:05:31.784213 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:05:31.784259 | orchestrator | 2026-01-28 00:05:31.784268 | orchestrator | TASK [Update package cache] **************************************************** 2026-01-28 00:05:31.822108 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:05:31.822151 | orchestrator | 
2026-01-28 00:05:31.822158 | orchestrator | TASK [Install required packages] *********************************************** 2026-01-28 00:05:31.847392 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:05:31.847453 | orchestrator | 2026-01-28 00:05:31.847464 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-28 00:05:31.877297 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:05:31.877366 | orchestrator | 2026-01-28 00:05:31.877378 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-28 00:05:31.907566 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:05:31.907641 | orchestrator | 2026-01-28 00:05:31.907654 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-01-28 00:05:31.940409 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:05:31.940465 | orchestrator | 2026-01-28 00:05:31.940475 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-01-28 00:05:31.972745 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:05:31.972780 | orchestrator | 2026-01-28 00:05:31.972788 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-01-28 00:05:32.738485 | orchestrator | changed: [testbed-manager] 2026-01-28 00:05:32.738520 | orchestrator | 2026-01-28 00:05:32.738526 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-01-28 00:08:22.489597 | orchestrator | changed: [testbed-manager] 2026-01-28 00:08:22.489648 | orchestrator | 2026-01-28 00:08:22.489660 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-01-28 00:09:42.572669 | orchestrator | changed: [testbed-manager] 2026-01-28 00:09:42.572707 | orchestrator | 2026-01-28 00:09:42.572713 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-01-28 00:10:04.804599 | orchestrator | changed: [testbed-manager] 2026-01-28 00:10:04.804695 | orchestrator | 2026-01-28 00:10:04.804715 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-01-28 00:10:14.424617 | orchestrator | changed: [testbed-manager] 2026-01-28 00:10:14.424711 | orchestrator | 2026-01-28 00:10:14.424728 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-28 00:10:14.476432 | orchestrator | ok: [testbed-manager] 2026-01-28 00:10:14.476527 | orchestrator | 2026-01-28 00:10:14.476550 | orchestrator | TASK [Get current user] ******************************************************** 2026-01-28 00:10:15.563651 | orchestrator | ok: [testbed-manager] 2026-01-28 00:10:15.563741 | orchestrator | 2026-01-28 00:10:15.563759 | orchestrator | TASK [Create venv directory] *************************************************** 2026-01-28 00:10:16.329579 | orchestrator | changed: [testbed-manager] 2026-01-28 00:10:16.329786 | orchestrator | 2026-01-28 00:10:16.329817 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-01-28 00:10:22.802779 | orchestrator | changed: [testbed-manager] 2026-01-28 00:10:22.802873 | orchestrator | 2026-01-28 00:10:22.802926 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-01-28 00:10:28.949346 | orchestrator | changed: [testbed-manager] 2026-01-28 00:10:28.949405 | orchestrator | 2026-01-28 00:10:28.949420 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-01-28 00:10:31.768897 | orchestrator | changed: [testbed-manager] 2026-01-28 00:10:31.769006 | orchestrator | 2026-01-28 00:10:31.769020 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-01-28 00:10:33.635848 | 
orchestrator | changed: [testbed-manager] 2026-01-28 00:10:33.635900 | orchestrator | 2026-01-28 00:10:33.635910 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-01-28 00:10:34.764287 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-28 00:10:34.764346 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-28 00:10:34.764360 | orchestrator | 2026-01-28 00:10:34.764371 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-01-28 00:10:34.817443 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-28 00:10:34.817522 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-28 00:10:34.817535 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-28 00:10:34.817546 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-28 00:10:44.522714 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-01-28 00:10:44.522756 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-01-28 00:10:44.522761 | orchestrator | 2026-01-28 00:10:44.522766 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-01-28 00:10:45.101727 | orchestrator | changed: [testbed-manager] 2026-01-28 00:10:45.101837 | orchestrator | 2026-01-28 00:10:45.101859 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-01-28 00:14:05.622711 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-01-28 00:14:05.622826 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-01-28 00:14:05.622845 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-01-28 00:14:05.622858 | orchestrator | 2026-01-28 00:14:05.622871 | orchestrator | TASK [Install local collections] *********************************************** 2026-01-28 00:14:07.960048 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-01-28 00:14:07.960184 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-01-28 00:14:07.960202 | orchestrator | 2026-01-28 00:14:07.960215 | orchestrator | PLAY [Create operator user] **************************************************** 2026-01-28 00:14:07.960228 | orchestrator | 2026-01-28 00:14:07.960239 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-28 00:14:09.342705 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:09.342925 | orchestrator | 2026-01-28 00:14:09.342949 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-01-28 00:14:09.389879 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:09.389966 | 
orchestrator | 2026-01-28 00:14:09.389982 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-01-28 00:14:09.458001 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:09.458148 | orchestrator | 2026-01-28 00:14:09.458171 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-01-28 00:14:10.264607 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:10.264688 | orchestrator | 2026-01-28 00:14:10.264703 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-01-28 00:14:11.011045 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:11.011178 | orchestrator | 2026-01-28 00:14:11.011193 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-01-28 00:14:12.423610 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-01-28 00:14:12.423678 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-01-28 00:14:12.423687 | orchestrator | 2026-01-28 00:14:12.423708 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-01-28 00:14:13.849607 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:13.849724 | orchestrator | 2026-01-28 00:14:13.849741 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-01-28 00:14:15.743835 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-01-28 00:14:15.744050 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-01-28 00:14:15.744068 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-01-28 00:14:15.744108 | orchestrator | 2026-01-28 00:14:15.744121 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-01-28 00:14:15.803495 | orchestrator | skipping: 
[testbed-manager] 2026-01-28 00:14:15.803616 | orchestrator | 2026-01-28 00:14:15.803632 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-01-28 00:14:15.888238 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:14:15.888326 | orchestrator | 2026-01-28 00:14:15.888344 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-01-28 00:14:16.453329 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:16.453421 | orchestrator | 2026-01-28 00:14:16.453437 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-01-28 00:14:16.528412 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:14:16.528502 | orchestrator | 2026-01-28 00:14:16.528517 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-01-28 00:14:17.441310 | orchestrator | changed: [testbed-manager] => (item=None) 2026-01-28 00:14:17.441396 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:17.441412 | orchestrator | 2026-01-28 00:14:17.441425 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-01-28 00:14:17.482791 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:14:17.482873 | orchestrator | 2026-01-28 00:14:17.482890 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-01-28 00:14:17.522219 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:14:17.522301 | orchestrator | 2026-01-28 00:14:17.522315 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-01-28 00:14:17.555940 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:14:17.556045 | orchestrator | 2026-01-28 00:14:17.556071 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-01-28 00:14:17.628509 | 
orchestrator | skipping: [testbed-manager] 2026-01-28 00:14:17.628611 | orchestrator | 2026-01-28 00:14:17.628636 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-01-28 00:14:18.393288 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:18.393375 | orchestrator | 2026-01-28 00:14:18.393392 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-01-28 00:14:18.393405 | orchestrator | 2026-01-28 00:14:18.393416 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-28 00:14:19.880927 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:19.881018 | orchestrator | 2026-01-28 00:14:19.881033 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-01-28 00:14:20.881907 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:20.881992 | orchestrator | 2026-01-28 00:14:20.882006 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:14:20.882098 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-01-28 00:14:20.882115 | orchestrator | 2026-01-28 00:14:21.410313 | orchestrator | ok: Runtime: 0:08:54.667685 2026-01-28 00:14:21.427623 | 2026-01-28 00:14:21.427787 | TASK [Point out that logging in on the manager is now possible] 2026-01-28 00:14:21.476648 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-01-28 00:14:21.486745 | 2026-01-28 00:14:21.486904 | TASK [Point out that the following task takes some time and does not give any output] 2026-01-28 00:14:21.535060 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output here. It takes a few minutes for this task to complete. 
2026-01-28 00:14:21.544911 | 2026-01-28 00:14:21.545065 | TASK [Run manager part 1 + 2] 2026-01-28 00:14:23.797580 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-01-28 00:14:23.862400 | orchestrator | 2026-01-28 00:14:23.862451 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-01-28 00:14:23.862459 | orchestrator | 2026-01-28 00:14:23.862472 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-28 00:14:26.973620 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:26.973674 | orchestrator | 2026-01-28 00:14:26.973695 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-01-28 00:14:27.019218 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:14:27.019290 | orchestrator | 2026-01-28 00:14:27.019308 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-01-28 00:14:27.060737 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:27.060786 | orchestrator | 2026-01-28 00:14:27.060796 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-28 00:14:27.102388 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:27.102452 | orchestrator | 2026-01-28 00:14:27.102465 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-28 00:14:27.168778 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:27.168858 | orchestrator | 2026-01-28 00:14:27.168872 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-28 00:14:27.232765 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:27.233461 | orchestrator | 2026-01-28 00:14:27.233486 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-28 00:14:27.289446 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-01-28 00:14:27.289499 | orchestrator | 2026-01-28 00:14:27.289505 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-28 00:14:28.045751 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:28.045813 | orchestrator | 2026-01-28 00:14:28.045822 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-28 00:14:28.091463 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:14:28.091517 | orchestrator | 2026-01-28 00:14:28.091526 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-28 00:14:29.522865 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:29.522922 | orchestrator | 2026-01-28 00:14:29.522933 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-28 00:14:30.110667 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:30.110725 | orchestrator | 2026-01-28 00:14:30.110734 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-01-28 00:14:32.255946 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:32.256549 | orchestrator | 2026-01-28 00:14:32.256584 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-01-28 00:14:48.270744 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:48.270824 | orchestrator | 2026-01-28 00:14:48.270841 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-01-28 00:14:48.946823 | orchestrator | ok: [testbed-manager] 2026-01-28 00:14:48.946908 | orchestrator | 2026-01-28 00:14:48.946934 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-01-28 00:14:49.000229 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:14:49.000294 | orchestrator | 2026-01-28 00:14:49.000309 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-01-28 00:14:49.954667 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:49.955442 | orchestrator | 2026-01-28 00:14:49.955476 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-01-28 00:14:50.961945 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:50.962000 | orchestrator | 2026-01-28 00:14:50.962013 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-01-28 00:14:51.555710 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:51.555751 | orchestrator | 2026-01-28 00:14:51.555759 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-01-28 00:14:51.599186 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-01-28 00:14:51.599274 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-01-28 00:14:51.599287 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-01-28 00:14:51.599297 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-01-28 00:14:55.924879 | orchestrator | changed: [testbed-manager] 2026-01-28 00:14:55.924925 | orchestrator | 2026-01-28 00:14:55.924933 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-01-28 00:15:05.150466 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-01-28 00:15:05.150611 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-01-28 00:15:05.150625 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-01-28 00:15:05.150635 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-01-28 00:15:05.150652 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-01-28 00:15:05.150661 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-01-28 00:15:05.150669 | orchestrator | 2026-01-28 00:15:05.150678 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-01-28 00:15:06.282610 | orchestrator | changed: [testbed-manager] 2026-01-28 00:15:06.282710 | orchestrator | 2026-01-28 00:15:06.282727 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-01-28 00:15:06.328737 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:15:06.328827 | orchestrator | 2026-01-28 00:15:06.328844 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-01-28 00:15:09.612162 | orchestrator | changed: [testbed-manager] 2026-01-28 00:15:09.612261 | orchestrator | 2026-01-28 00:15:09.612276 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-01-28 00:15:09.658990 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:15:09.659095 | orchestrator | 2026-01-28 00:15:09.659154 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-01-28 00:16:56.000912 | orchestrator | changed: [testbed-manager] 2026-01-28 
00:16:56.001015 | orchestrator | 2026-01-28 00:16:56.001037 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-28 00:16:57.277616 | orchestrator | ok: [testbed-manager] 2026-01-28 00:16:57.278283 | orchestrator | 2026-01-28 00:16:57.278317 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:16:57.278331 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-01-28 00:16:57.278343 | orchestrator | 2026-01-28 00:16:57.693682 | orchestrator | ok: Runtime: 0:02:35.525738 2026-01-28 00:16:57.710906 | 2026-01-28 00:16:57.711055 | TASK [Reboot manager] 2026-01-28 00:16:59.250975 | orchestrator | ok: Runtime: 0:00:01.022600 2026-01-28 00:16:59.268294 | 2026-01-28 00:16:59.268462 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-01-28 00:17:13.421197 | orchestrator | ok 2026-01-28 00:17:13.431948 | 2026-01-28 00:17:13.432089 | TASK [Wait a little longer for the manager so that everything is ready] 2026-01-28 00:18:13.475530 | orchestrator | ok 2026-01-28 00:18:13.485108 | 2026-01-28 00:18:13.485249 | TASK [Deploy manager + bootstrap nodes] 2026-01-28 00:18:16.127093 | orchestrator | 2026-01-28 00:18:16.127436 | orchestrator | # DEPLOY MANAGER 2026-01-28 00:18:16.127465 | orchestrator | 2026-01-28 00:18:16.127480 | orchestrator | + set -e 2026-01-28 00:18:16.127494 | orchestrator | + echo 2026-01-28 00:18:16.127513 | orchestrator | + echo '# DEPLOY MANAGER' 2026-01-28 00:18:16.127540 | orchestrator | + echo 2026-01-28 00:18:16.127606 | orchestrator | + cat /opt/manager-vars.sh 2026-01-28 00:18:16.130650 | orchestrator | export NUMBER_OF_NODES=6 2026-01-28 00:18:16.130699 | orchestrator | 2026-01-28 00:18:16.130712 | orchestrator | export CEPH_VERSION=reef 2026-01-28 00:18:16.130734 | orchestrator | export CONFIGURATION_VERSION=main 2026-01-28 00:18:16.130755 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-01-28 00:18:16.130791 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-01-28 00:18:16.130812 | orchestrator | 2026-01-28 00:18:16.130835 | orchestrator | export ARA=false 2026-01-28 00:18:16.130848 | orchestrator | export DEPLOY_MODE=manager 2026-01-28 00:18:16.130865 | orchestrator | export TEMPEST=true 2026-01-28 00:18:16.130877 | orchestrator | export IS_ZUUL=true 2026-01-28 00:18:16.130894 | orchestrator | 2026-01-28 00:18:16.130922 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-01-28 00:18:16.130945 | orchestrator | export EXTERNAL_API=false 2026-01-28 00:18:16.130963 | orchestrator | 2026-01-28 00:18:16.130981 | orchestrator | export IMAGE_USER=ubuntu 2026-01-28 00:18:16.131004 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-01-28 00:18:16.131024 | orchestrator | 2026-01-28 00:18:16.131043 | orchestrator | export CEPH_STACK=ceph-ansible 2026-01-28 00:18:16.131075 | orchestrator | 2026-01-28 00:18:16.131094 | orchestrator | + echo 2026-01-28 00:18:16.131116 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-28 00:18:16.132024 | orchestrator | ++ export INTERACTIVE=false 2026-01-28 00:18:16.132058 | orchestrator | ++ INTERACTIVE=false 2026-01-28 00:18:16.132080 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-28 00:18:16.132098 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-28 00:18:16.132486 | orchestrator | + source /opt/manager-vars.sh 2026-01-28 00:18:16.132519 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-28 00:18:16.132531 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-28 00:18:16.132542 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-28 00:18:16.132552 | orchestrator | ++ CEPH_VERSION=reef 2026-01-28 00:18:16.132569 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-28 00:18:16.132588 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-28 00:18:16.132608 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-28 00:18:16.132635 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-28 00:18:16.132655 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-28 00:18:16.132689 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-28 00:18:16.132710 | orchestrator | ++ export ARA=false 2026-01-28 00:18:16.132753 | orchestrator | ++ ARA=false 2026-01-28 00:18:16.132765 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-28 00:18:16.132776 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-28 00:18:16.132787 | orchestrator | ++ export TEMPEST=true 2026-01-28 00:18:16.132798 | orchestrator | ++ TEMPEST=true 2026-01-28 00:18:16.132809 | orchestrator | ++ export IS_ZUUL=true 2026-01-28 00:18:16.132820 | orchestrator | ++ IS_ZUUL=true 2026-01-28 00:18:16.132831 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-01-28 00:18:16.132842 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-01-28 00:18:16.132858 | orchestrator | ++ export EXTERNAL_API=false 2026-01-28 00:18:16.132870 | orchestrator | ++ EXTERNAL_API=false 2026-01-28 00:18:16.132881 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-28 00:18:16.132891 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-28 00:18:16.132902 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-28 00:18:16.132914 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-28 00:18:16.132925 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-28 00:18:16.132936 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-28 00:18:16.132947 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-01-28 00:18:16.188691 | orchestrator | + docker version 2026-01-28 00:18:16.475120 | orchestrator | Client: Docker Engine - Community 2026-01-28 00:18:16.475317 | orchestrator | Version: 27.5.1 2026-01-28 00:18:16.475347 | orchestrator | API version: 1.47 2026-01-28 00:18:16.475364 | orchestrator | Go version: go1.22.11 2026-01-28 00:18:16.475375 | orchestrator | Git commit: 9f9e405 2026-01-28 00:18:16.475386 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-28 00:18:16.475399 | orchestrator | OS/Arch: linux/amd64 2026-01-28 00:18:16.475410 | orchestrator | Context: default 2026-01-28 00:18:16.475421 | orchestrator | 2026-01-28 00:18:16.475433 | orchestrator | Server: Docker Engine - Community 2026-01-28 00:18:16.475444 | orchestrator | Engine: 2026-01-28 00:18:16.475455 | orchestrator | Version: 27.5.1 2026-01-28 00:18:16.475467 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-01-28 00:18:16.475510 | orchestrator | Go version: go1.22.11 2026-01-28 00:18:16.475522 | orchestrator | Git commit: 4c9b3b0 2026-01-28 00:18:16.475533 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-01-28 00:18:16.475544 | orchestrator | OS/Arch: linux/amd64 2026-01-28 00:18:16.475555 | orchestrator | Experimental: false 2026-01-28 00:18:16.475565 | orchestrator | containerd: 2026-01-28 00:18:16.475576 | orchestrator | Version: v2.2.1 2026-01-28 00:18:16.475587 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-01-28 00:18:16.475598 | orchestrator | runc: 2026-01-28 00:18:16.475609 | orchestrator | Version: 1.3.4 2026-01-28 00:18:16.475620 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-01-28 00:18:16.475631 | orchestrator | docker-init: 2026-01-28 00:18:16.475642 | orchestrator | Version: 0.19.0 2026-01-28 00:18:16.475654 | orchestrator | GitCommit: de40ad0 2026-01-28 00:18:16.479003 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-01-28 00:18:16.488766 | orchestrator | + set -e 2026-01-28 00:18:16.488865 | orchestrator | + source /opt/manager-vars.sh 2026-01-28 00:18:16.488880 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-28 00:18:16.488892 | orchestrator | ++ NUMBER_OF_NODES=6 2026-01-28 00:18:16.488901 | orchestrator | ++ export CEPH_VERSION=reef 2026-01-28 00:18:16.488910 | orchestrator | ++ CEPH_VERSION=reef 2026-01-28 00:18:16.488919 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-01-28 
00:18:16.488931 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-01-28 00:18:16.488941 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-01-28 00:18:16.488950 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-01-28 00:18:16.488959 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-01-28 00:18:16.488967 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-01-28 00:18:16.488976 | orchestrator | ++ export ARA=false 2026-01-28 00:18:16.488985 | orchestrator | ++ ARA=false 2026-01-28 00:18:16.488994 | orchestrator | ++ export DEPLOY_MODE=manager 2026-01-28 00:18:16.489004 | orchestrator | ++ DEPLOY_MODE=manager 2026-01-28 00:18:16.489013 | orchestrator | ++ export TEMPEST=true 2026-01-28 00:18:16.489021 | orchestrator | ++ TEMPEST=true 2026-01-28 00:18:16.489030 | orchestrator | ++ export IS_ZUUL=true 2026-01-28 00:18:16.489039 | orchestrator | ++ IS_ZUUL=true 2026-01-28 00:18:16.489048 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-01-28 00:18:16.489057 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-01-28 00:18:16.489066 | orchestrator | ++ export EXTERNAL_API=false 2026-01-28 00:18:16.489075 | orchestrator | ++ EXTERNAL_API=false 2026-01-28 00:18:16.489083 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-01-28 00:18:16.489092 | orchestrator | ++ IMAGE_USER=ubuntu 2026-01-28 00:18:16.489101 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-01-28 00:18:16.489109 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-01-28 00:18:16.489118 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-01-28 00:18:16.489145 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-01-28 00:18:16.489155 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-28 00:18:16.489182 | orchestrator | ++ export INTERACTIVE=false 2026-01-28 00:18:16.489191 | orchestrator | ++ INTERACTIVE=false 2026-01-28 00:18:16.489200 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-28 00:18:16.489213 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-01-28 00:18:16.489349 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-01-28 00:18:16.489370 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-01-28 00:18:16.496996 | orchestrator | + set -e
2026-01-28 00:18:16.497015 | orchestrator | + VERSION=9.5.0
2026-01-28 00:18:16.497027 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-01-28 00:18:16.504540 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-01-28 00:18:16.504563 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-01-28 00:18:16.508988 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-01-28 00:18:16.512600 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-01-28 00:18:16.521425 | orchestrator | /opt/configuration ~
2026-01-28 00:18:16.521457 | orchestrator | + set -e
2026-01-28 00:18:16.521468 | orchestrator | + pushd /opt/configuration
2026-01-28 00:18:16.521479 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-28 00:18:16.523385 | orchestrator | + source /opt/venv/bin/activate
2026-01-28 00:18:16.524622 | orchestrator | ++ deactivate nondestructive
2026-01-28 00:18:16.524639 | orchestrator | ++ '[' -n '' ']'
2026-01-28 00:18:16.524652 | orchestrator | ++ '[' -n '' ']'
2026-01-28 00:18:16.524687 | orchestrator | ++ hash -r
2026-01-28 00:18:16.524697 | orchestrator | ++ '[' -n '' ']'
2026-01-28 00:18:16.524714 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-28 00:18:16.524724 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-28 00:18:16.524734 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-28 00:18:16.524748 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-28 00:18:16.524758 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-28 00:18:16.524768 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-28 00:18:16.524778 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-28 00:18:16.524788 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-28 00:18:16.524799 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-28 00:18:16.524809 | orchestrator | ++ export PATH
2026-01-28 00:18:16.524876 | orchestrator | ++ '[' -n '' ']'
2026-01-28 00:18:16.524889 | orchestrator | ++ '[' -z '' ']'
2026-01-28 00:18:16.524899 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-28 00:18:16.524908 | orchestrator | ++ PS1='(venv) '
2026-01-28 00:18:16.524918 | orchestrator | ++ export PS1
2026-01-28 00:18:16.525025 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-28 00:18:16.525040 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-28 00:18:16.525050 | orchestrator | ++ hash -r
2026-01-28 00:18:16.525060 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-01-28 00:18:17.713319 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-01-28 00:18:17.714957 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-01-28 00:18:17.716716 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-01-28 00:18:17.718596 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-01-28 00:18:17.720121 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-01-28 00:18:17.731666 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-01-28 00:18:17.732953 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-01-28 00:18:17.733779 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-01-28 00:18:17.735192 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-01-28 00:18:17.768996 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-01-28 00:18:17.770126 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-01-28 00:18:17.771831 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-01-28 00:18:17.773328 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-01-28 00:18:17.777477 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-01-28 00:18:17.980191 | orchestrator | ++ which gilt
2026-01-28 00:18:17.983041 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-01-28 00:18:17.983098 | orchestrator | + /opt/venv/bin/gilt overlay
2026-01-28 00:18:18.218236 | orchestrator | osism.cfg-generics:
2026-01-28 00:18:18.396355 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-01-28 00:18:18.396449 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-01-28 00:18:18.396510 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-01-28 00:18:18.396675 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-01-28 00:18:18.960793 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-01-28 00:18:18.972108 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-01-28 00:18:19.426986 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-01-28 00:18:19.475445 | orchestrator | ~
2026-01-28 00:18:19.475538 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-28 00:18:19.475555 | orchestrator | + deactivate
2026-01-28 00:18:19.475568 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-01-28 00:18:19.475582 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-28 00:18:19.475593 | orchestrator | + export PATH
2026-01-28 00:18:19.475628 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-01-28 00:18:19.475641 | orchestrator | + '[' -n '' ']'
2026-01-28 00:18:19.475655 | orchestrator | + hash -r
2026-01-28 00:18:19.475666 | orchestrator | + '[' -n '' ']'
2026-01-28 00:18:19.475677 | orchestrator | + unset VIRTUAL_ENV
2026-01-28 00:18:19.475688 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-01-28 00:18:19.475713 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-01-28 00:18:19.475725 | orchestrator | + unset -f deactivate
2026-01-28 00:18:19.475736 | orchestrator | + popd
2026-01-28 00:18:19.476954 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-01-28 00:18:19.476977 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-01-28 00:18:19.478419 | orchestrator | ++ semver 9.5.0 7.0.0
2026-01-28 00:18:19.538920 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-28 00:18:19.539017 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-01-28 00:18:19.539972 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-01-28 00:18:19.600539 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-28 00:18:19.601508 | orchestrator | ++ semver 2024.2 2025.1
2026-01-28 00:18:19.665501 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-28 00:18:19.665595 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-01-28 00:18:19.762746 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-01-28 00:18:19.762828 | orchestrator | + source /opt/venv/bin/activate
2026-01-28 00:18:19.762844 | orchestrator | ++ deactivate nondestructive
2026-01-28 00:18:19.762857 | orchestrator | ++ '[' -n '' ']'
2026-01-28 00:18:19.762869 | orchestrator | ++ '[' -n '' ']'
2026-01-28 00:18:19.762880 | orchestrator | ++ hash -r
2026-01-28 00:18:19.762891 | orchestrator | ++ '[' -n '' ']'
2026-01-28 00:18:19.762902 | orchestrator | ++ unset VIRTUAL_ENV
2026-01-28 00:18:19.762913 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-01-28 00:18:19.762924 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-01-28 00:18:19.763102 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-01-28 00:18:19.763132 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-01-28 00:18:19.763151 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-01-28 00:18:19.763242 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-01-28 00:18:19.763256 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-28 00:18:19.763284 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-01-28 00:18:19.763296 | orchestrator | ++ export PATH
2026-01-28 00:18:19.763308 | orchestrator | ++ '[' -n '' ']'
2026-01-28 00:18:19.763434 | orchestrator | ++ '[' -z '' ']'
2026-01-28 00:18:19.763450 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-01-28 00:18:19.763462 | orchestrator | ++ PS1='(venv) '
2026-01-28 00:18:19.763473 | orchestrator | ++ export PS1
2026-01-28 00:18:19.763484 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-01-28 00:18:19.763496 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-01-28 00:18:19.763507 | orchestrator | ++ hash -r
2026-01-28 00:18:19.763519 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-01-28 00:18:20.908692 | orchestrator |
2026-01-28 00:18:20.908795 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-01-28 00:18:20.908811 | orchestrator |
2026-01-28 00:18:20.908823 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-28 00:18:21.516793 | orchestrator | ok: [testbed-manager]
2026-01-28 00:18:21.516874 | orchestrator |
2026-01-28 00:18:21.516885 | orchestrator | TASK [Copy fact files] *********************************************************
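The `semver` helper invoked above prints `1`, `0`, or `-1` depending on how its first argument compares to its second (e.g. `semver 9.5.0 7.0.0` yields `1`, which gates the `enable_osism_kubernetes: true` setting). Its implementation is not shown in the log; a rough equivalent for plain dotted versions can be sketched with GNU `sort -V`:

```shell
# Sketch of a semver-style comparator: prints 1 if $1 > $2, 0 if equal, -1 otherwise.
# Pre-release suffixes like 10.0.0-0 are only handled as well as `sort -V` handles them.
semver() {
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
    echo 1   # $2 sorts first, so $1 is the newer version
  else
    echo -1
  fi
}

semver 9.5.0 7.0.0    # -> 1  (the osism-kubernetes path above is taken)
semver 2024.2 2025.1  # -> -1 (2025.1-only configuration is skipped)
```

The gating pattern in the trace is then simply `[[ $(semver "$A" "$B") -ge 0 ]]`.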
2026-01-28 00:18:22.560220 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:22.560289 | orchestrator |
2026-01-28 00:18:22.560297 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-01-28 00:18:22.560323 | orchestrator |
2026-01-28 00:18:22.560328 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-28 00:18:26.054354 | orchestrator | ok: [testbed-manager]
2026-01-28 00:18:26.054490 | orchestrator |
2026-01-28 00:18:26.054514 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-01-28 00:18:26.109527 | orchestrator | ok: [testbed-manager]
2026-01-28 00:18:26.109608 | orchestrator |
2026-01-28 00:18:26.109621 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-01-28 00:18:26.571424 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:26.571519 | orchestrator |
2026-01-28 00:18:26.571536 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-01-28 00:18:26.604502 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:18:26.604573 | orchestrator |
2026-01-28 00:18:26.604584 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-01-28 00:18:26.951740 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:26.951869 | orchestrator |
2026-01-28 00:18:26.951897 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2026-01-28 00:18:27.017607 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:18:27.017704 | orchestrator |
2026-01-28 00:18:27.017719 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-01-28 00:18:27.365019 | orchestrator | ok: [testbed-manager]
2026-01-28 00:18:27.365112 | orchestrator |
2026-01-28 00:18:27.365127 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-01-28 00:18:27.494784 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:18:27.494901 | orchestrator |
2026-01-28 00:18:27.494928 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-01-28 00:18:27.494941 | orchestrator |
2026-01-28 00:18:27.494953 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-28 00:18:29.266818 | orchestrator | ok: [testbed-manager]
2026-01-28 00:18:29.266918 | orchestrator |
2026-01-28 00:18:29.266936 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-01-28 00:18:29.378568 | orchestrator | included: osism.services.traefik for testbed-manager
2026-01-28 00:18:29.378663 | orchestrator |
2026-01-28 00:18:29.378677 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-01-28 00:18:29.434831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-01-28 00:18:29.434931 | orchestrator |
2026-01-28 00:18:29.434949 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-01-28 00:18:30.562998 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-01-28 00:18:30.563105 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-01-28 00:18:30.563124 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-01-28 00:18:30.563136 | orchestrator |
2026-01-28 00:18:30.563150 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-01-28 00:18:32.433602 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-01-28 00:18:32.433714 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-01-28 00:18:32.433735 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-01-28 00:18:32.433751 | orchestrator |
2026-01-28 00:18:32.433766 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-01-28 00:18:33.127366 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-28 00:18:33.127455 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:33.127469 | orchestrator |
2026-01-28 00:18:33.127479 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-01-28 00:18:33.788894 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-28 00:18:33.788983 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:33.789000 | orchestrator |
2026-01-28 00:18:33.789013 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-01-28 00:18:33.857514 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:18:33.857590 | orchestrator |
2026-01-28 00:18:33.857601 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-01-28 00:18:34.252580 | orchestrator | ok: [testbed-manager]
2026-01-28 00:18:34.252688 | orchestrator |
2026-01-28 00:18:34.252707 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-01-28 00:18:34.322648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-01-28 00:18:34.322706 | orchestrator |
2026-01-28 00:18:34.322714 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-01-28 00:18:35.434240 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:35.434357 | orchestrator |
2026-01-28 00:18:35.434375 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-01-28 00:18:36.247718 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:36.247848 | orchestrator |
2026-01-28 00:18:36.247873 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-01-28 00:18:46.403718 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:46.403832 | orchestrator |
2026-01-28 00:18:46.403870 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-01-28 00:18:46.476236 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:18:46.476329 | orchestrator |
2026-01-28 00:18:46.476345 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-01-28 00:18:46.476358 | orchestrator |
2026-01-28 00:18:46.476369 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-28 00:18:48.397028 | orchestrator | ok: [testbed-manager]
2026-01-28 00:18:48.397130 | orchestrator |
2026-01-28 00:18:48.397153 | orchestrator | TASK [Apply manager role] ******************************************************
2026-01-28 00:18:48.515701 | orchestrator | included: osism.services.manager for testbed-manager
2026-01-28 00:18:48.515793 | orchestrator |
2026-01-28 00:18:48.515808 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-01-28 00:18:48.577666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-01-28 00:18:48.577762 | orchestrator |
2026-01-28 00:18:48.577778 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-01-28 00:18:51.293730 | orchestrator | ok: [testbed-manager]
2026-01-28 00:18:51.293839 | orchestrator |
2026-01-28 00:18:51.293854 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-01-28 00:18:51.348845 | orchestrator | ok: [testbed-manager]
2026-01-28 00:18:51.348932 | orchestrator |
2026-01-28 00:18:51.348945 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-01-28 00:18:51.464866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-01-28 00:18:51.464973 | orchestrator |
2026-01-28 00:18:51.464997 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-01-28 00:18:54.127364 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-01-28 00:18:54.127479 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-01-28 00:18:54.127497 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-01-28 00:18:54.127510 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-01-28 00:18:54.127522 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-01-28 00:18:54.127535 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-01-28 00:18:54.127546 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-01-28 00:18:54.127557 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-01-28 00:18:54.127569 | orchestrator |
2026-01-28 00:18:54.127583 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-01-28 00:18:54.721518 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:54.721634 | orchestrator |
2026-01-28 00:18:54.721651 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-01-28 00:18:55.345328 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:55.345406 | orchestrator |
2026-01-28 00:18:55.345416 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-01-28 00:18:55.423680 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-01-28 00:18:55.423803 | orchestrator |
2026-01-28 00:18:55.423819 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-01-28 00:18:56.662377 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-01-28 00:18:56.662485 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-01-28 00:18:56.662505 | orchestrator |
2026-01-28 00:18:56.662519 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-01-28 00:18:57.277910 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:57.278000 | orchestrator |
2026-01-28 00:18:57.278014 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-01-28 00:18:57.334829 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:18:57.334922 | orchestrator |
2026-01-28 00:18:57.334938 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-01-28 00:18:57.412925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-01-28 00:18:57.413026 | orchestrator |
2026-01-28 00:18:57.413042 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-01-28 00:18:57.990488 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:57.990598 | orchestrator |
2026-01-28 00:18:57.990618 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-01-28 00:18:58.060688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-01-28 00:18:58.060790 | orchestrator |
2026-01-28 00:18:58.060806 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-01-28 00:18:59.290271 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-28 00:18:59.290375 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-28 00:18:59.290393 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:59.290407 | orchestrator |
2026-01-28 00:18:59.290420 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-01-28 00:18:59.882826 | orchestrator | changed: [testbed-manager]
2026-01-28 00:18:59.882943 | orchestrator |
2026-01-28 00:18:59.882967 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-01-28 00:18:59.926965 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:18:59.927082 | orchestrator |
2026-01-28 00:18:59.927140 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-01-28 00:19:00.027840 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-01-28 00:19:00.027918 | orchestrator |
2026-01-28 00:19:00.027928 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-01-28 00:19:00.515611 | orchestrator | changed: [testbed-manager]
2026-01-28 00:19:00.515718 | orchestrator |
2026-01-28 00:19:00.515733 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-01-28 00:19:00.878767 | orchestrator | changed: [testbed-manager]
2026-01-28 00:19:00.878874 | orchestrator |
2026-01-28 00:19:00.878902 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-01-28 00:19:02.132740 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-01-28 00:19:02.132872 | orchestrator | changed: [testbed-manager] => (item=openstack)
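The `Set fs.inotify.max_user_watches` / `max_user_instances` tasks above raise kernel inotify limits via sysctl. The persistent form of such a change writes a drop-in under `/etc/sysctl.d/` and reloads; sketched here against `/tmp` so it is side-effect free, with hypothetical values (the role's actual numbers are not shown in the log):

```shell
# Hypothetical drop-in; on a real host this would be /etc/sysctl.d/99-manager.conf
# followed by `sysctl --system` (root required). Values below are illustrative.
SYSCTL_D=/tmp/sysctl.d
mkdir -p "$SYSCTL_D"
cat > "$SYSCTL_D/99-manager.conf" <<'EOF'
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 1024
EOF

cat "$SYSCTL_D/99-manager.conf"   # show what would be applied
```

Raising these limits matters on a manager node because the celery/conductor containers watch many files and can otherwise exhaust the per-user inotify budget.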
2026-01-28 00:19:02.132891 | orchestrator |
2026-01-28 00:19:02.132915 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-01-28 00:19:02.809798 | orchestrator | changed: [testbed-manager]
2026-01-28 00:19:02.809900 | orchestrator |
2026-01-28 00:19:02.809919 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-01-28 00:19:03.191154 | orchestrator | ok: [testbed-manager]
2026-01-28 00:19:03.191281 | orchestrator |
2026-01-28 00:19:03.191299 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-01-28 00:19:03.533045 | orchestrator | changed: [testbed-manager]
2026-01-28 00:19:03.533144 | orchestrator |
2026-01-28 00:19:03.533161 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-01-28 00:19:03.580897 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:19:03.581006 | orchestrator |
2026-01-28 00:19:03.581018 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-01-28 00:19:03.646077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-01-28 00:19:03.646238 | orchestrator |
2026-01-28 00:19:03.646258 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-01-28 00:19:03.684851 | orchestrator | ok: [testbed-manager]
2026-01-28 00:19:03.684930 | orchestrator |
2026-01-28 00:19:03.684945 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-01-28 00:19:05.526622 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-01-28 00:19:05.526727 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-01-28 00:19:05.526745 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-01-28 00:19:05.526758 | orchestrator |
2026-01-28 00:19:05.526770 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-01-28 00:19:06.187220 | orchestrator | changed: [testbed-manager]
2026-01-28 00:19:06.187332 | orchestrator |
2026-01-28 00:19:06.187348 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-01-28 00:19:06.864698 | orchestrator | changed: [testbed-manager]
2026-01-28 00:19:06.864800 | orchestrator |
2026-01-28 00:19:06.864816 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-01-28 00:19:07.589532 | orchestrator | changed: [testbed-manager]
2026-01-28 00:19:07.589643 | orchestrator |
2026-01-28 00:19:07.589662 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-01-28 00:19:07.660657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-01-28 00:19:07.660758 | orchestrator |
2026-01-28 00:19:07.660774 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-01-28 00:19:07.704203 | orchestrator | ok: [testbed-manager]
2026-01-28 00:19:07.704316 | orchestrator |
2026-01-28 00:19:07.704342 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-01-28 00:19:08.389862 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-01-28 00:19:08.389967 | orchestrator |
2026-01-28 00:19:08.389984 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-01-28 00:19:08.461080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-01-28 00:19:08.461219 | orchestrator |
2026-01-28 00:19:08.461236 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-01-28 00:19:09.096015 | orchestrator | changed: [testbed-manager]
2026-01-28 00:19:09.096085 | orchestrator |
2026-01-28 00:19:09.096092 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-01-28 00:19:09.643024 | orchestrator | ok: [testbed-manager]
2026-01-28 00:19:09.643144 | orchestrator |
2026-01-28 00:19:09.643170 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-01-28 00:19:09.694461 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:19:09.694555 | orchestrator |
2026-01-28 00:19:09.694571 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-01-28 00:19:09.740496 | orchestrator | ok: [testbed-manager]
2026-01-28 00:19:09.740594 | orchestrator |
2026-01-28 00:19:09.740627 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-01-28 00:19:10.490936 | orchestrator | changed: [testbed-manager]
2026-01-28 00:19:10.491038 | orchestrator |
2026-01-28 00:19:10.491056 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-01-28 00:20:14.798810 | orchestrator | changed: [testbed-manager]
2026-01-28 00:20:14.798938 | orchestrator |
2026-01-28 00:20:14.798960 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-01-28 00:20:15.782837 | orchestrator | ok: [testbed-manager]
2026-01-28 00:20:15.782931 | orchestrator |
2026-01-28 00:20:15.782943 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-01-28 00:20:15.837814 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:20:15.837901 | orchestrator |
2026-01-28 00:20:15.837916 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
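The task sequence above (copy systemd unit file, copy docker-compose.yml, pull images, manage service) is the common pattern of a systemd unit wrapping a Compose project. A hypothetical unit of that shape, written to `/tmp` so the sketch has no side effects; the real unit's contents are not part of the log:

```shell
# Illustrative shape of a docker-compose-wrapping unit; paths and options
# are assumptions, not the role's actual template.
cat > /tmp/manager.service <<'EOF'
[Unit]
Description=manager service (docker compose wrapper)
Requires=docker.service
After=docker.service network-online.target

[Service]
WorkingDirectory=/opt/manager
ExecStartPre=/usr/bin/docker compose pull --quiet
ExecStart=/usr/bin/docker compose up --remove-orphans
ExecStop=/usr/bin/docker compose down
Restart=always

[Install]
WantedBy=multi-user.target
EOF

# On a real host: install to /etc/systemd/system/, then
# `systemctl daemon-reload && systemctl enable --now manager.service`.
```

Pulling images in a separate task (as the role does) keeps the long download out of the service start path, so `systemctl start` stays fast.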
2026-01-28 00:20:19.022319 | orchestrator | changed: [testbed-manager]
2026-01-28 00:20:19.022390 | orchestrator |
2026-01-28 00:20:19.022397 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-01-28 00:20:19.115080 | orchestrator | ok: [testbed-manager]
2026-01-28 00:20:19.115137 | orchestrator |
2026-01-28 00:20:19.115143 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-28 00:20:19.115148 | orchestrator |
2026-01-28 00:20:19.115152 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-01-28 00:20:19.172968 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:20:19.173046 | orchestrator |
2026-01-28 00:20:19.173109 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-01-28 00:21:19.227532 | orchestrator | Pausing for 60 seconds
2026-01-28 00:21:19.227613 | orchestrator | changed: [testbed-manager]
2026-01-28 00:21:19.227620 | orchestrator |
2026-01-28 00:21:19.227625 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-01-28 00:21:22.735822 | orchestrator | changed: [testbed-manager]
2026-01-28 00:21:22.735935 | orchestrator |
2026-01-28 00:21:22.735952 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-01-28 00:22:24.714399 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-01-28 00:22:24.714529 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-01-28 00:22:24.714550 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
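The `Wait for an healthy manager service` handler polls with up to 50 retries until the container reports healthy; here it needed a few attempts before succeeding. The loop can be sketched as follows, where `check_health` is a simulated stand-in for something like `docker inspect --format '{{.State.Health.Status}}' <container>`:

```shell
# Simulated health probe: reports healthy on the third attempt, mirroring the
# FAILED - RETRYING lines above. A stand-in, not the real docker inspect call.
ATTEMPT=0
check_health() {
  ATTEMPT=$((ATTEMPT + 1))
  if [ "$ATTEMPT" -ge 3 ]; then HEALTH=healthy; else HEALTH=starting; fi
}

wait_healthy() {
  retries=$1
  i=0
  while [ "$i" -lt "$retries" ]; do
    check_health                 # sets $HEALTH; no subshell, so state persists
    if [ "$HEALTH" = "healthy" ]; then
      return 0
    fi
    i=$((i + 1))                 # the real handler also sleeps between attempts
  done
  return 1
}

if wait_healthy 50; then MANAGER_STATE=healthy; else MANAGER_STATE=unhealthy; fi
echo "state=$MANAGER_STATE after $ATTEMPT probes"
```

Ansible implements the same idea declaratively with `retries`/`delay`/`until` on the task, which is where the "(50 retries left)" countdown in the log comes from.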
2026-01-28 00:22:24.714563 | orchestrator | changed: [testbed-manager]
2026-01-28 00:22:24.714577 | orchestrator |
2026-01-28 00:22:24.714590 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-01-28 00:22:34.986452 | orchestrator | changed: [testbed-manager]
2026-01-28 00:22:34.986569 | orchestrator |
2026-01-28 00:22:34.986590 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-01-28 00:22:35.078336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-01-28 00:22:35.078428 | orchestrator |
2026-01-28 00:22:35.078443 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-01-28 00:22:35.078455 | orchestrator |
2026-01-28 00:22:35.078472 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-01-28 00:22:35.124086 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:22:35.124242 | orchestrator |
2026-01-28 00:22:35.124261 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-01-28 00:22:35.190103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-01-28 00:22:35.190216 | orchestrator |
2026-01-28 00:22:35.190232 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-01-28 00:22:35.981634 | orchestrator | changed: [testbed-manager]
2026-01-28 00:22:35.981747 | orchestrator |
2026-01-28 00:22:35.981769 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-01-28 00:22:39.314849 | orchestrator | ok: [testbed-manager]
2026-01-28 00:22:39.314941 | orchestrator |
2026-01-28 00:22:39.314955 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-01-28 00:22:39.385125 | orchestrator | ok: [testbed-manager] => {
2026-01-28 00:22:39.385248 | orchestrator | "version_check_result.stdout_lines": [
2026-01-28 00:22:39.385265 | orchestrator | "=== OSISM Container Version Check ===",
2026-01-28 00:22:39.385277 | orchestrator | "Checking running containers against expected versions...",
2026-01-28 00:22:39.385289 | orchestrator | "",
2026-01-28 00:22:39.385302 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-01-28 00:22:39.385314 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-01-28 00:22:39.385326 | orchestrator | " Enabled: true",
2026-01-28 00:22:39.385338 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-01-28 00:22:39.385373 | orchestrator | " Status: ✅ MATCH",
2026-01-28 00:22:39.385385 | orchestrator | "",
2026-01-28 00:22:39.385396 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-01-28 00:22:39.385407 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-01-28 00:22:39.385418 | orchestrator | " Enabled: true",
2026-01-28 00:22:39.385429 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-01-28 00:22:39.385440 | orchestrator | " Status: ✅ MATCH",
2026-01-28 00:22:39.385451 | orchestrator | "",
2026-01-28 00:22:39.385462 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-01-28 00:22:39.385473 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-01-28 00:22:39.385484 | orchestrator | " Enabled: true",
2026-01-28 00:22:39.385495 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-01-28 00:22:39.385506 | orchestrator | " Status: ✅ MATCH",
2026-01-28 00:22:39.385517 | orchestrator | "",
2026-01-28 00:22:39.385528 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-01-28 00:22:39.385539 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-01-28 00:22:39.385550 | orchestrator | " Enabled: true",
2026-01-28 00:22:39.385563 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-01-28 00:22:39.385574 | orchestrator | " Status: ✅ MATCH",
2026-01-28 00:22:39.385584 | orchestrator | "",
2026-01-28 00:22:39.385595 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-01-28 00:22:39.385607 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-01-28 00:22:39.385627 | orchestrator | " Enabled: true",
2026-01-28 00:22:39.385646 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-01-28 00:22:39.385665 | orchestrator | " Status: ✅ MATCH",
2026-01-28 00:22:39.385684 | orchestrator | "",
2026-01-28 00:22:39.385702 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-01-28 00:22:39.385720 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-28 00:22:39.385736 | orchestrator | " Enabled: true",
2026-01-28 00:22:39.385753 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-01-28 00:22:39.385772 | orchestrator | " Status: ✅ MATCH",
2026-01-28 00:22:39.385790 | orchestrator | "",
2026-01-28 00:22:39.385810 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-01-28 00:22:39.385830 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-28 00:22:39.385842 | orchestrator | " Enabled: true",
2026-01-28 00:22:39.385853 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-01-28 00:22:39.385864 | orchestrator | " Status: ✅ MATCH",
2026-01-28 00:22:39.385875 | orchestrator | "",
2026-01-28 00:22:39.385885 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-01-28 00:22:39.385896 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-28 00:22:39.385907 | orchestrator | " Enabled: true", 2026-01-28 00:22:39.385918 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-01-28 00:22:39.385928 | orchestrator | " Status: ✅ MATCH", 2026-01-28 00:22:39.385939 | orchestrator | "", 2026-01-28 00:22:39.385950 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-01-28 00:22:39.385961 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-01-28 00:22:39.385971 | orchestrator | " Enabled: true", 2026-01-28 00:22:39.385982 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-01-28 00:22:39.385993 | orchestrator | " Status: ✅ MATCH", 2026-01-28 00:22:39.386004 | orchestrator | "", 2026-01-28 00:22:39.386071 | orchestrator | "Checking service: redis (Redis Cache)", 2026-01-28 00:22:39.386086 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-28 00:22:39.386097 | orchestrator | " Enabled: true", 2026-01-28 00:22:39.386108 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-01-28 00:22:39.386130 | orchestrator | " Status: ✅ MATCH", 2026-01-28 00:22:39.386141 | orchestrator | "", 2026-01-28 00:22:39.386152 | orchestrator | "Checking service: api (OSISM API Service)", 2026-01-28 00:22:39.386163 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-28 00:22:39.386195 | orchestrator | " Enabled: true", 2026-01-28 00:22:39.386206 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-28 00:22:39.386217 | orchestrator | " Status: ✅ MATCH", 2026-01-28 00:22:39.386228 | orchestrator | "", 2026-01-28 00:22:39.386239 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-01-28 00:22:39.386250 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-28 00:22:39.386262 | orchestrator | " Enabled: true", 2026-01-28 00:22:39.386273 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-28 00:22:39.386284 | orchestrator | " Status: ✅ MATCH", 2026-01-28 00:22:39.386294 | orchestrator | "", 2026-01-28 00:22:39.386305 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-01-28 00:22:39.386316 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-28 00:22:39.386327 | orchestrator | " Enabled: true", 2026-01-28 00:22:39.386342 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-28 00:22:39.386361 | orchestrator | " Status: ✅ MATCH", 2026-01-28 00:22:39.386388 | orchestrator | "", 2026-01-28 00:22:39.386410 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-01-28 00:22:39.386428 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-28 00:22:39.386446 | orchestrator | " Enabled: true", 2026-01-28 00:22:39.386465 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-28 00:22:39.386510 | orchestrator | " Status: ✅ MATCH", 2026-01-28 00:22:39.386530 | orchestrator | "", 2026-01-28 00:22:39.386562 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-01-28 00:22:39.386583 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-28 00:22:39.386603 | orchestrator | " Enabled: true", 2026-01-28 00:22:39.386622 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-01-28 00:22:39.386641 | orchestrator | " Status: ✅ MATCH", 2026-01-28 00:22:39.386659 | orchestrator | "", 2026-01-28 00:22:39.386678 | orchestrator | "=== Summary ===", 2026-01-28 00:22:39.386698 | orchestrator | "Errors (version mismatches): 0", 2026-01-28 00:22:39.386718 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-01-28 00:22:39.386738 | orchestrator | "", 2026-01-28 00:22:39.386756 | orchestrator | "✅ All running containers match expected versions!" 2026-01-28 00:22:39.386776 | orchestrator | ] 2026-01-28 00:22:39.386795 | orchestrator | } 2026-01-28 00:22:39.386815 | orchestrator | 2026-01-28 00:22:39.386835 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-01-28 00:22:39.434388 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:22:39.434460 | orchestrator | 2026-01-28 00:22:39.434470 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:22:39.434478 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2026-01-28 00:22:39.434486 | orchestrator | 2026-01-28 00:22:39.533940 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-01-28 00:22:39.534104 | orchestrator | + deactivate 2026-01-28 00:22:39.534127 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-01-28 00:22:39.534141 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-01-28 00:22:39.534152 | orchestrator | + export PATH 2026-01-28 00:22:39.534164 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-01-28 00:22:39.534240 | orchestrator | + '[' -n '' ']' 2026-01-28 00:22:39.534252 | orchestrator | + hash -r 2026-01-28 00:22:39.534263 | orchestrator | + '[' -n '' ']' 2026-01-28 00:22:39.534275 | orchestrator | + unset VIRTUAL_ENV 2026-01-28 00:22:39.534286 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-01-28 00:22:39.534298 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-01-28 00:22:39.534309 | orchestrator | + unset -f deactivate 2026-01-28 00:22:39.534321 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-01-28 00:22:39.542389 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-28 00:22:39.542446 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-28 00:22:39.542459 | orchestrator | + local max_attempts=60 2026-01-28 00:22:39.542471 | orchestrator | + local name=ceph-ansible 2026-01-28 00:22:39.542482 | orchestrator | + local attempt_num=1 2026-01-28 00:22:39.543378 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:22:39.582929 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:22:39.583004 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-28 00:22:39.583020 | orchestrator | + local max_attempts=60 2026-01-28 00:22:39.583032 | orchestrator | + local name=kolla-ansible 2026-01-28 00:22:39.583043 | orchestrator | + local attempt_num=1 2026-01-28 00:22:39.584051 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-28 00:22:39.621752 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:22:39.621834 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-28 00:22:39.621849 | orchestrator | + local max_attempts=60 2026-01-28 00:22:39.621861 | orchestrator | + local name=osism-ansible 2026-01-28 00:22:39.621872 | orchestrator | + local attempt_num=1 2026-01-28 00:22:39.622408 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-28 00:22:39.658141 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:22:39.658261 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-28 00:22:39.658277 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-28 00:22:40.378111 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-01-28 00:22:40.550466 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-01-28 00:22:40.550586 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-01-28 00:22:40.550608 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-01-28 00:22:40.550628 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-01-28 00:22:40.550649 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-01-28 00:22:40.550696 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-01-28 00:22:40.550717 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-01-28 00:22:40.550736 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-01-28 00:22:40.550754 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-01-28 00:22:40.550773 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-01-28 00:22:40.550792 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-01-28 00:22:40.550811 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-01-28 00:22:40.550860 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-01-28 00:22:40.550882 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-01-28 00:22:40.550936 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-01-28 00:22:40.550955 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-01-28 00:22:40.556869 | orchestrator | ++ semver 9.5.0 7.0.0 2026-01-28 00:22:40.608355 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-28 00:22:40.608427 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-01-28 00:22:40.610727 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-01-28 00:22:52.959631 | orchestrator | 2026-01-28 00:22:52 | INFO  | Task 91fdf73a-5be1-40f6-a975-e2442ab48fe9 (resolvconf) was prepared for execution. 2026-01-28 00:22:52.959747 | orchestrator | 2026-01-28 00:22:52 | INFO  | It takes a moment until task 91fdf73a-5be1-40f6-a975-e2442ab48fe9 (resolvconf) has been started and output is visible here. 
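The `set -x` trace above shows a `wait_for_container_healthy` helper probing `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`. A minimal sketch of that loop; the 5-second retry interval and the `DOCKER_BIN` override are assumptions added here so the probe can be stubbed, not part of the traced script:

```shell
# Hedged reconstruction of the wait_for_container_healthy helper traced above.
# DOCKER_BIN is an assumption: it lets tests substitute a fake probe command.
DOCKER_BIN="${DOCKER_BIN:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    while [ "$("$DOCKER_BIN" inspect -f '{{.State.Health.Status}}' "$name")" != healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

With `DOCKER_BIN` left at the real Docker CLI this behaves like the traced calls, e.g. `wait_for_container_healthy 60 ceph-ansible`.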
2026-01-28 00:23:07.325870 | orchestrator |
2026-01-28 00:23:07.325982 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-01-28 00:23:07.325998 | orchestrator |
2026-01-28 00:23:07.326010 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-28 00:23:07.326080 | orchestrator | Wednesday 28 January 2026 00:22:57 +0000 (0:00:00.145) 0:00:00.145 *****
2026-01-28 00:23:07.326093 | orchestrator | ok: [testbed-manager]
2026-01-28 00:23:07.326105 | orchestrator |
2026-01-28 00:23:07.326117 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-01-28 00:23:07.326130 | orchestrator | Wednesday 28 January 2026 00:23:00 +0000 (0:00:03.973) 0:00:04.119 *****
2026-01-28 00:23:07.326141 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:23:07.326153 | orchestrator |
2026-01-28 00:23:07.326164 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-01-28 00:23:07.326199 | orchestrator | Wednesday 28 January 2026 00:23:01 +0000 (0:00:00.061) 0:00:04.180 *****
2026-01-28 00:23:07.326211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-01-28 00:23:07.326223 | orchestrator |
2026-01-28 00:23:07.326234 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-01-28 00:23:07.326245 | orchestrator | Wednesday 28 January 2026 00:23:01 +0000 (0:00:00.075) 0:00:04.256 *****
2026-01-28 00:23:07.326275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-01-28 00:23:07.326287 | orchestrator |
2026-01-28 00:23:07.326298 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-01-28 00:23:07.326310 | orchestrator | Wednesday 28 January 2026 00:23:01 +0000 (0:00:00.071) 0:00:04.328 *****
2026-01-28 00:23:07.326321 | orchestrator | ok: [testbed-manager]
2026-01-28 00:23:07.326332 | orchestrator |
2026-01-28 00:23:07.326343 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-01-28 00:23:07.326354 | orchestrator | Wednesday 28 January 2026 00:23:02 +0000 (0:00:01.270) 0:00:05.598 *****
2026-01-28 00:23:07.326365 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:23:07.326376 | orchestrator |
2026-01-28 00:23:07.326387 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-01-28 00:23:07.326421 | orchestrator | Wednesday 28 January 2026 00:23:02 +0000 (0:00:00.064) 0:00:05.662 *****
2026-01-28 00:23:07.326435 | orchestrator | ok: [testbed-manager]
2026-01-28 00:23:07.326449 | orchestrator |
2026-01-28 00:23:07.326462 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-01-28 00:23:07.326474 | orchestrator | Wednesday 28 January 2026 00:23:03 +0000 (0:00:00.510) 0:00:06.173 *****
2026-01-28 00:23:07.326487 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:23:07.326499 | orchestrator |
2026-01-28 00:23:07.326524 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-01-28 00:23:07.326538 | orchestrator | Wednesday 28 January 2026 00:23:03 +0000 (0:00:00.086) 0:00:06.260 *****
2026-01-28 00:23:07.326552 | orchestrator | changed: [testbed-manager]
2026-01-28 00:23:07.326564 | orchestrator |
2026-01-28 00:23:07.326577 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-01-28 00:23:07.326589 | orchestrator | Wednesday 28 January 2026 00:23:03 +0000 (0:00:00.574) 0:00:06.834 *****
2026-01-28 00:23:07.326602 | orchestrator | changed: [testbed-manager]
2026-01-28 00:23:07.326614 | orchestrator |
2026-01-28 00:23:07.326627 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-01-28 00:23:07.326640 | orchestrator | Wednesday 28 January 2026 00:23:04 +0000 (0:00:01.133) 0:00:07.967 *****
2026-01-28 00:23:07.326652 | orchestrator | ok: [testbed-manager]
2026-01-28 00:23:07.326665 | orchestrator |
2026-01-28 00:23:07.326676 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-01-28 00:23:07.326689 | orchestrator | Wednesday 28 January 2026 00:23:05 +0000 (0:00:00.992) 0:00:08.960 *****
2026-01-28 00:23:07.326701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-01-28 00:23:07.326714 | orchestrator |
2026-01-28 00:23:07.326727 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-01-28 00:23:07.326738 | orchestrator | Wednesday 28 January 2026 00:23:05 +0000 (0:00:00.081) 0:00:09.042 *****
2026-01-28 00:23:07.326749 | orchestrator | changed: [testbed-manager]
2026-01-28 00:23:07.326760 | orchestrator |
2026-01-28 00:23:07.326770 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:23:07.326782 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-28 00:23:07.326793 | orchestrator |
2026-01-28 00:23:07.326804 | orchestrator |
2026-01-28 00:23:07.326815 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:23:07.326825 | orchestrator | Wednesday 28 January 2026 00:23:07 +0000 (0:00:01.175) 0:00:10.217 *****
2026-01-28 00:23:07.326836 | orchestrator | ===============================================================================
2026-01-28 00:23:07.326847 | orchestrator | Gathering Facts --------------------------------------------------------- 3.97s
2026-01-28 00:23:07.326858 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.27s
2026-01-28 00:23:07.326869 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s
2026-01-28 00:23:07.326879 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.13s
2026-01-28 00:23:07.326890 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.99s
2026-01-28 00:23:07.326901 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s
2026-01-28 00:23:07.326931 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.51s
2026-01-28 00:23:07.326944 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2026-01-28 00:23:07.326962 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-01-28 00:23:07.326981 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2026-01-28 00:23:07.327019 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2026-01-28 00:23:07.327043 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-01-28 00:23:07.327062 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2026-01-28 00:23:07.621301 | orchestrator | + osism apply sshconfig
2026-01-28 00:23:19.633912 | orchestrator | 2026-01-28 00:23:19 | INFO  | Task f326e7fb-1605-4c4b-8f02-50604421d362 (sshconfig) was prepared for execution.
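The `osism apply sshconfig` play whose output follows reports the tasks "Ensure .ssh/config.d exist", "Ensure config for each host exist", and "Assemble ssh config": one fragment per host, then a single assembled ssh config. A rough shell sketch of that fragment-and-assemble pattern; the temp directory, host list, and ssh options are illustrative assumptions, not what the role actually writes:

```shell
# Hedged sketch of the per-host fragment + assemble pattern reported by the
# sshconfig play below. workdir stands in for ~/.ssh; hosts and options are
# assumptions for illustration.
workdir="$(mktemp -d)"
mkdir -p "$workdir/config.d"
for host in testbed-manager testbed-node-0 testbed-node-1; do
    # One fragment per host, as in "Ensure config for each host exist".
    cat > "$workdir/config.d/$host" <<EOF
Host $host
    User dragon
    StrictHostKeyChecking accept-new
EOF
done
# "Assemble ssh config": concatenate the fragments into one file.
cat "$workdir"/config.d/* > "$workdir/config"
```

Splitting per-host fragments from the assembled file keeps each host's stanza independently regenerable, which is why assemble-style roles commonly use a `config.d` directory.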
2026-01-28 00:23:19.634083 | orchestrator | 2026-01-28 00:23:19 | INFO  | It takes a moment until task f326e7fb-1605-4c4b-8f02-50604421d362 (sshconfig) has been started and output is visible here.
2026-01-28 00:23:31.457665 | orchestrator |
2026-01-28 00:23:31.457802 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-01-28 00:23:31.457820 | orchestrator |
2026-01-28 00:23:31.457853 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-01-28 00:23:31.458558 | orchestrator | Wednesday 28 January 2026 00:23:23 +0000 (0:00:00.158) 0:00:00.158 *****
2026-01-28 00:23:31.458580 | orchestrator | ok: [testbed-manager]
2026-01-28 00:23:31.458595 | orchestrator |
2026-01-28 00:23:31.458607 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-01-28 00:23:31.458618 | orchestrator | Wednesday 28 January 2026 00:23:24 +0000 (0:00:00.535) 0:00:00.694 *****
2026-01-28 00:23:31.458629 | orchestrator | changed: [testbed-manager]
2026-01-28 00:23:31.458641 | orchestrator |
2026-01-28 00:23:31.458652 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-01-28 00:23:31.458664 | orchestrator | Wednesday 28 January 2026 00:23:24 +0000 (0:00:00.526) 0:00:01.220 *****
2026-01-28 00:23:31.458675 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-01-28 00:23:31.458686 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-01-28 00:23:31.458697 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-01-28 00:23:31.458708 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-01-28 00:23:31.458719 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-01-28 00:23:31.458730 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-01-28 00:23:31.458740 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-01-28 00:23:31.458751 | orchestrator |
2026-01-28 00:23:31.458762 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-01-28 00:23:31.458773 | orchestrator | Wednesday 28 January 2026 00:23:30 +0000 (0:00:05.732) 0:00:06.952 *****
2026-01-28 00:23:31.458784 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:23:31.458794 | orchestrator |
2026-01-28 00:23:31.458805 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-01-28 00:23:31.458816 | orchestrator | Wednesday 28 January 2026 00:23:30 +0000 (0:00:00.072) 0:00:07.025 *****
2026-01-28 00:23:31.458827 | orchestrator | changed: [testbed-manager]
2026-01-28 00:23:31.458838 | orchestrator |
2026-01-28 00:23:31.458849 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:23:31.458861 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-28 00:23:31.458873 | orchestrator |
2026-01-28 00:23:31.458884 | orchestrator |
2026-01-28 00:23:31.458894 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:23:31.458905 | orchestrator | Wednesday 28 January 2026 00:23:31 +0000 (0:00:00.535) 0:00:07.561 *****
2026-01-28 00:23:31.458916 | orchestrator | ===============================================================================
2026-01-28 00:23:31.458927 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.73s
2026-01-28 00:23:31.458938 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.54s
2026-01-28 00:23:31.458949 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s
2026-01-28 00:23:31.458985 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s
2026-01-28 00:23:31.458997 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2026-01-28 00:23:31.798378 | orchestrator | + osism apply known-hosts
2026-01-28 00:23:43.903561 | orchestrator | 2026-01-28 00:23:43 | INFO  | Task 61a38883-b3ef-4e36-a855-8b02e797185b (known-hosts) was prepared for execution.
2026-01-28 00:23:43.903681 | orchestrator | 2026-01-28 00:23:43 | INFO  | It takes a moment until task 61a38883-b3ef-4e36-a855-8b02e797185b (known-hosts) has been started and output is visible here.
2026-01-28 00:24:00.574448 | orchestrator |
2026-01-28 00:24:00.574562 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-01-28 00:24:00.574606 | orchestrator |
2026-01-28 00:24:00.574619 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-01-28 00:24:00.574632 | orchestrator | Wednesday 28 January 2026 00:23:48 +0000 (0:00:00.164) 0:00:00.164 *****
2026-01-28 00:24:00.574644 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-01-28 00:24:00.574656 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-01-28 00:24:00.574667 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-01-28 00:24:00.574678 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-01-28 00:24:00.574689 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-01-28 00:24:00.574700 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-01-28 00:24:00.574711 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-01-28 00:24:00.574722 | orchestrator |
2026-01-28 00:24:00.574733 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-01-28 00:24:00.574745 | orchestrator | Wednesday 28 January 2026 00:23:53 +0000 (0:00:05.876) 0:00:06.040 *****
2026-01-28 00:24:00.574757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-01-28 00:24:00.574770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-01-28 00:24:00.574781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-01-28 00:24:00.574792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-01-28 00:24:00.574803 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-01-28 00:24:00.574824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-01-28 00:24:00.574836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-01-28 00:24:00.574847 | orchestrator |
2026-01-28 00:24:00.574858 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-28 00:24:00.574869 | orchestrator | Wednesday 28 January 2026 00:23:54 +0000 (0:00:00.155) 0:00:06.195 *****
2026-01-28 00:24:00.574881 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB060Jo9FFbQ4TjhBg2b076CTzzHQsj+xVrVwOPwl/aXLYeuVQdJ7jxoZXHlv6wtiNiOpC9HScXm48aMaAgRSPw=)
2026-01-28 00:24:00.574903 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC71VikiXKU3F9bzMRa1K1khEyidLf4NU5V9EueA6gwDVcL0/U8NNnaj+No51YKgaj2Yk82L6EeQb/VZhip01JDC45wiSEqfV3SanN/uoMEs1AKyaZVvZ2HderghCPAHfERjxOU5UOg96x7gMSJhWFdDbBAfP4Uevt1Dr7dL+7vLwUhrO+PL3dbj3lAZVT1COunO5nHpBww1zzK0JHrMBwAqZOZpDueSDJBcPYZ3OJ6X/1wi2GaPrVqz3mo50rTY+jVXiYZ4VBM4d1lSKBVH7OS655kOigFvoW18KCWyPJTQvQskEvFv+956IcS+Y81Td2NCdeD6Ref3PjmvWo7GnpSPAkJU57l9c6s5ozxG8oNtLEP5VJgQHezEbkUfQomawusgF96TltzMspTGv7PRgf7a6xl9wE2ZOiNWJM6/garUuulC3DldhTCI7tny92+HKy9q2zg3RrsVqliJekrhMwaSLwfRaJXJOvzLM+a8mgk3bjEH8q1wQ3s6YVvEBNPkcs=)
2026-01-28 00:24:00.574942 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAaOIsc5+rjQSJEJWSELXcOwEGU6jL6ORL64SDK6Yirh)
2026-01-28 00:24:00.574957 | orchestrator |
2026-01-28 00:24:00.574970 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-28 00:24:00.574983 | orchestrator | Wednesday 28 January 2026 00:23:55 +0000 (0:00:01.214) 0:00:07.410 *****
2026-01-28 00:24:00.574996 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEJ25kGBtN9l7jSv3/DFrZiQftmDO9NYMlZh7EPLqpyW)
2026-01-28 00:24:00.575043 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCil6Toj5jhYCxQPjdcP3oMaORZzeHqq7FSSF+J6l2YV+PXlB92gwZzykpO/nslk2AHlFLUYTeCe30223+wvCW4vfVs2232fpEC0IciyuZhOzvjJdcEUrTwotbtRDuUfdnZk2TdM+SD6prrV7cgOEtL5+7GRrojV7fMwZEIK1xLgf6D33IWqRIAaObdXstm1f97CVBXyxLLRuMgO7bXMYOGB2ZaV1BoAJs51HouInAn2QvaeQ0Zkh91i7pIM+AAhnPeEijrO+jrESIwS72e/MDjQQSEv1U/0NS9nk9xPMKsVWs47Ix+ZB7EknCk6zFmDT+sGlN32+3Qait2Y+bozb7p3VxbsgeKLeaqUnjjB48Qe/QPq3HYK/ktHtbQfrs/GQc3kWVOWrMxexnFKUiEvpnDkNWLraX5DgsP0J99U4TUTPjdJsBn3FgxTV/3ucbw0ahqfFGNnP40v3BRHU4ran+fCaaJ7cuxOZBEQBNX9ddSTc0XUneZe5mRXgLP5nIdIRM=)
2026-01-28 00:24:00.575058 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDDbKI2QhrQnjDDaEGLNcrBlvFeS23ItsJIqUHiOwD+djd9iYTpyNWsXqZzQgnHIFtMRuZsrLO+6udZ1oPvAioA=)
2026-01-28 00:24:00.575071 | orchestrator |
2026-01-28 00:24:00.575084 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-28 00:24:00.575097 | orchestrator | Wednesday 28 January 2026 00:23:56 +0000 (0:00:01.073) 0:00:08.483 *****
2026-01-28 00:24:00.575110 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0cK5EW5TqAQCYQXE6NhLV9CCjDnWicb0IwDPP9awSjTI20tyhWpiTwLjz4fe5Ackdq0ywlG1mhEjAR6dC/yDbVWGaWMQhBduzepLAzAUNOTujFq5gWA06qNMDBr8rFmsXpDHpRwdum2K0y34VxJ2AveDJfs2lh+UWDvqhIA/MDtysi1zN7zo/W9wEZI3RF/kf6WBo82c+JbjV+w/TyyKqB3hn2S52Tpz+9bMlXNwAcTAYjcdGJXUHQ7f/Z+A7UPHMcj8yR0Kbjzy6S5iYeKbu8JSbrejeWSKFaGr1A2cyCpZ0LqSUs7swdNPKnvfRNuwDel6BaLP+VjPZL5Oy+gXdlrssFTfjwJPiGqiD4Q6JIaKQFjaPjuXOcaYb947LmP23BCsW+xMs0cnaHDhspquH+Yk5ArBm2nhoj81rF6PjfSq6IATEPuLjjurYl0LSBFHh6gjobZxy66+Cl+nCOYxwnsNwq4Z6wiChCIrq6BPr2mxoPHLCGkJ5GEIkORb43Zs=)
2026-01-28 00:24:00.575126 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHO6Ofeck2C9pfjmMROnY3iAe/TGWKPAjY6yzyKeoIob5ZOSVaJyF2L4UwSInkZh72Cbqi/5sLDT57B/uVYnlPs=)
2026-01-28 00:24:00.575145 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJAjbRkBPkyNHlQHRxScsrybTCHLOeQRuGU0yCm52D2n)
2026-01-28 00:24:00.575165 | orchestrator |
2026-01-28 00:24:00.575233 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-28 00:24:00.575251 | orchestrator | Wednesday 28 January 2026 00:23:57 +0000 (0:00:01.074) 0:00:09.558 *****
2026-01-28 00:24:00.575270 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZO8QbVnKvMihJufEZupf45KqnsPlCfBoGgJ7rJMbNX9vPDkSs389L3gKSlkLcvmI6M//BndmwJz07Tw7OlEHg01YLswbQO0NTP4QSknmOjBKVH9qsHykld8LHbYs+hKEA1wn8CIhZMJ+EDtnltv3ACbNOGJA5ID3Zv027XJd3RqNGP+HfePFZPU1z1N3pDoI0rnYnVwKA2AWU7W+8wv0URYGCy8oydlnnE4g5e7xvurxBy6dpSUGuaqXAjkTMqNzKpTbYGxAV4dlcb2Dnbq0WC3n7i17EO1Px4b32lRZyfDNtMe/g6Hi/k/U9hZ1KRdkb0m9QGHD9MIU8VkBlePsohXWvDwbS/TEQ6Kdc+XqF+icTtvsE0GZyUDVFShYg7SdEpNaqIu4SHDdCsAWDv4/9SJX2wiy+AlLp3MH+1gYIZ1Kx2nRR9aL/dyABjQIRFgatD0a84KfJyPJxJdW8o+U27icNbq2blX15Kp0d+9wjpZLXOSuAURJcXODZ6iCKaLc=)
2026-01-28 00:24:00.575304 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHO6YykHGIS3u0RaKpivJ8YRpDpsQaA+zgIhIXMBjQxw)
2026-01-28 00:24:00.575322 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDiWOGhz7Zd++B4LThJfUs5NRmlhfFzRz5jL63ouEJ3lwye79XnnkGxlXr7o5yr3Ck1rXujh/Etvw1GFEmQqIvk=)
2026-01-28 00:24:00.575338 | orchestrator |
2026-01-28 00:24:00.575356 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-01-28 00:24:00.575373 | orchestrator | Wednesday 28 January 2026 00:23:58 +0000 (0:00:01.079) 0:00:10.637 *****
2026-01-28 00:24:00.575486 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1pTnMCsQDYUQvrbW+jLdVlPqnWmIFiDWJ/iG3NkLlTubFhlgQpo96KhgH6t7gvAa8CuMIvcD89rrVLGkVF9kWAaTZELlcSu9qJQPCnGSNgDtoKpC/2CjyeTN2o/XLFWjpHSnDwL6xzGR3iNvHepK9t7LAqzgYHpvonN3yihCiebH18T/kTgKpWuJdhc7Q0MCahOzvawvcBZOgknCdBiqvMRHNxwoYKRFoIsw7UW0paVNNwtXZkRz8dKXPS9bHxJimmTRPRN8T1aaEnuF8yWk9/9X6a2711FRORgB3+eXKy27WZSQGu7yknQrXzTCUxwfHQ9uLgK3MuTAhn9JvB+DBm65L+gQ6ykYQxfL7N0IWd6Lf1LG51r7IQS7fMUoPwwgqCIURblZbK6NK1o4lPSwbvkit+o66CUQBRg7Ps9cDHCBfnG46cnTfk8EqkiQ8/BIPQTvMLELwDD2jtCOC7r+uMKo4jXuzS9Pzx/ctFeRxAAbK0eLxW/ZYePa6zl0uLrU=) 2026-01-28 00:24:00.575508 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJgpD2Jl1P8uwCh8kIByWWVVDTOj8Y8cp0snRF/8vsWvXuEfXbcTauqJB5amHahbbJACnhSqfAK8Dqe5EZIsUVQ=) 2026-01-28 00:24:00.575527 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWp5fifrCw6XFAIXAMlSNG4UBRKzmz+jZ5TfaPa6LZH) 2026-01-28 00:24:00.575544 | orchestrator | 2026-01-28 00:24:00.575559 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-28 00:24:00.575574 | orchestrator | Wednesday 28 January 2026 00:23:59 +0000 (0:00:01.044) 0:00:11.681 ***** 2026-01-28 00:24:00.575607 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIEeevlnmj3RvxAnxNRvXcVsZseAjY4b5AwUAWwCe0Fh) 2026-01-28 00:24:11.325712 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDPl5Rhp6XV6BjnE1TePRd5Gmu5zRajBxe4BaOJa7F8CodAog532GJurxZ/SPEB0CVGmF5RCJM+cn6MkGzyCUide21/yxfgHUyS3CB8xVb210kLJKiuJTr6DVo38Bnb3IyDgBKvaqkZT6CWoVNuNB7D3TA4zvk65OrAOb155MHZz37aflYc/SZbEL10OocxrQj9jq5l+p3sOSzkcSRvumHPoS6O0yOSc0yR9e0U4xcAw130shGQBPWQ1Ng1f70iXx8bBXyvsggvxfqTTq4KTqi2Lc/iGUCnjPC2/AdLaFdUPWctuTAzD0SB5vg/YTM1E+mrG41BRWnDbBNTXhReTIRkpw1yc7tTukbtjkvnMxtryaBBa1liUvBiOwIUR/tpLX0o2iQaVoCSPHUh4kUA1dOrisYw2LuFzPqa3FyVg0rnFR5g/QCEap3KyekDc9vZMalYfNhbb9jQpeuplaW5wrla+sWgpKkQZCG9j0yzQ3vfFJqvsl2ShrAq4duqfskDljU=) 2026-01-28 00:24:11.325843 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGs3tKZKtpTiA487Nvssq0LmA2rqdw+fHJKBcPRNCbM7T5HMQfYNd5VKvxbrHve10S1/WE0VMW39iIrFmbUUMZE=) 2026-01-28 00:24:11.325872 | orchestrator | 2026-01-28 00:24:11.325894 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-28 00:24:11.325908 | orchestrator | Wednesday 28 January 2026 00:24:00 +0000 (0:00:01.028) 0:00:12.710 ***** 2026-01-28 00:24:11.325921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfG0bJVbOJUT+ug+aRB7tpXQe+NHE48OsDV24pjoejeuPlGC6MrVOkO3dlrh4sKAew3jIIzTFoyCtjmiy2fYcTLCrAPhm0LBPgKw7vXYe9MWPWs6xX6k/f3GEXmqVEXPOzO8q0X5G3cdMAIY05KJ2c1NhI5HUCiyyzV7fsGQIw0lu7WAtPoywpvTB/NOCDPgG4U7r64zFYGlIVHtmGVew1faAVfr9bsYJZY1YB+l8jbB46wPntAIm9sXhICA8s3ON7anJuG7M3M1Avx72/J6pXtHs1FwKQuyZqsBU7Xna8Gy+BdXvyVf1TP3APj8n1M4NJ0WdzYlu1oNBmvHXbBrmbdp+H0AoJjXI0/eCGRFf+sNTM0bR7HK3RBs+AIPHUd3I6T+6FSyjdXT/GlTr20RPOGLI3ka058bRRD/9ZootxdE0YE65T8hR3SanwOsT1Vr3p3LOIN1mbL2Ic1ewMR3yX0q+V1Jgls6i/09KrSi7tPOYXWegwr17axx+TSwNE7qk=) 2026-01-28 00:24:11.325961 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAAceTn49mTSgMLyTSJMSkeBY6GbfqT18q/Ezxl1TVGdGJGYyfGjHcMmX/52R+4NvaQbaOI1hjH8nwdaIvx/ho8=) 
2026-01-28 00:24:11.325974 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHGBtfzWcf+O77KiLyGOuRmU+c89+sh/0GfXqpU4REZh) 2026-01-28 00:24:11.325986 | orchestrator | 2026-01-28 00:24:11.325998 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-01-28 00:24:11.326010 | orchestrator | Wednesday 28 January 2026 00:24:01 +0000 (0:00:01.056) 0:00:13.766 ***** 2026-01-28 00:24:11.326085 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-01-28 00:24:11.326105 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-01-28 00:24:11.326125 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-01-28 00:24:11.326144 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-01-28 00:24:11.326164 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-01-28 00:24:11.326217 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-01-28 00:24:11.326229 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-01-28 00:24:11.326242 | orchestrator | 2026-01-28 00:24:11.326255 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-01-28 00:24:11.326270 | orchestrator | Wednesday 28 January 2026 00:24:06 +0000 (0:00:05.322) 0:00:19.089 ***** 2026-01-28 00:24:11.326283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-01-28 00:24:11.326298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-01-28 00:24:11.326311 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-01-28 00:24:11.326343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-01-28 00:24:11.326356 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-01-28 00:24:11.326368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-01-28 00:24:11.326380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-01-28 00:24:11.326392 | orchestrator | 2026-01-28 00:24:11.326431 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-28 00:24:11.326453 | orchestrator | Wednesday 28 January 2026 00:24:07 +0000 (0:00:00.180) 0:00:19.270 ***** 2026-01-28 00:24:11.326473 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAaOIsc5+rjQSJEJWSELXcOwEGU6jL6ORL64SDK6Yirh) 2026-01-28 00:24:11.326492 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC71VikiXKU3F9bzMRa1K1khEyidLf4NU5V9EueA6gwDVcL0/U8NNnaj+No51YKgaj2Yk82L6EeQb/VZhip01JDC45wiSEqfV3SanN/uoMEs1AKyaZVvZ2HderghCPAHfERjxOU5UOg96x7gMSJhWFdDbBAfP4Uevt1Dr7dL+7vLwUhrO+PL3dbj3lAZVT1COunO5nHpBww1zzK0JHrMBwAqZOZpDueSDJBcPYZ3OJ6X/1wi2GaPrVqz3mo50rTY+jVXiYZ4VBM4d1lSKBVH7OS655kOigFvoW18KCWyPJTQvQskEvFv+956IcS+Y81Td2NCdeD6Ref3PjmvWo7GnpSPAkJU57l9c6s5ozxG8oNtLEP5VJgQHezEbkUfQomawusgF96TltzMspTGv7PRgf7a6xl9wE2ZOiNWJM6/garUuulC3DldhTCI7tny92+HKy9q2zg3RrsVqliJekrhMwaSLwfRaJXJOvzLM+a8mgk3bjEH8q1wQ3s6YVvEBNPkcs=) 2026-01-28 00:24:11.326535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB060Jo9FFbQ4TjhBg2b076CTzzHQsj+xVrVwOPwl/aXLYeuVQdJ7jxoZXHlv6wtiNiOpC9HScXm48aMaAgRSPw=) 2026-01-28 00:24:11.326548 | orchestrator | 2026-01-28 00:24:11.326561 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-28 00:24:11.326578 | orchestrator | Wednesday 28 January 2026 00:24:08 +0000 (0:00:01.066) 0:00:20.337 ***** 2026-01-28 00:24:11.326592 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEJ25kGBtN9l7jSv3/DFrZiQftmDO9NYMlZh7EPLqpyW) 2026-01-28 00:24:11.326605 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCil6Toj5jhYCxQPjdcP3oMaORZzeHqq7FSSF+J6l2YV+PXlB92gwZzykpO/nslk2AHlFLUYTeCe30223+wvCW4vfVs2232fpEC0IciyuZhOzvjJdcEUrTwotbtRDuUfdnZk2TdM+SD6prrV7cgOEtL5+7GRrojV7fMwZEIK1xLgf6D33IWqRIAaObdXstm1f97CVBXyxLLRuMgO7bXMYOGB2ZaV1BoAJs51HouInAn2QvaeQ0Zkh91i7pIM+AAhnPeEijrO+jrESIwS72e/MDjQQSEv1U/0NS9nk9xPMKsVWs47Ix+ZB7EknCk6zFmDT+sGlN32+3Qait2Y+bozb7p3VxbsgeKLeaqUnjjB48Qe/QPq3HYK/ktHtbQfrs/GQc3kWVOWrMxexnFKUiEvpnDkNWLraX5DgsP0J99U4TUTPjdJsBn3FgxTV/3ucbw0ahqfFGNnP40v3BRHU4ran+fCaaJ7cuxOZBEQBNX9ddSTc0XUneZe5mRXgLP5nIdIRM=) 2026-01-28 00:24:11.326618 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDDbKI2QhrQnjDDaEGLNcrBlvFeS23ItsJIqUHiOwD+djd9iYTpyNWsXqZzQgnHIFtMRuZsrLO+6udZ1oPvAioA=) 2026-01-28 00:24:11.326637 | orchestrator | 2026-01-28 00:24:11.326654 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-28 00:24:11.326673 | orchestrator | Wednesday 28 January 2026 00:24:09 +0000 (0:00:01.036) 0:00:21.373 ***** 2026-01-28 00:24:11.326691 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJAjbRkBPkyNHlQHRxScsrybTCHLOeQRuGU0yCm52D2n) 2026-01-28 00:24:11.326711 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0cK5EW5TqAQCYQXE6NhLV9CCjDnWicb0IwDPP9awSjTI20tyhWpiTwLjz4fe5Ackdq0ywlG1mhEjAR6dC/yDbVWGaWMQhBduzepLAzAUNOTujFq5gWA06qNMDBr8rFmsXpDHpRwdum2K0y34VxJ2AveDJfs2lh+UWDvqhIA/MDtysi1zN7zo/W9wEZI3RF/kf6WBo82c+JbjV+w/TyyKqB3hn2S52Tpz+9bMlXNwAcTAYjcdGJXUHQ7f/Z+A7UPHMcj8yR0Kbjzy6S5iYeKbu8JSbrejeWSKFaGr1A2cyCpZ0LqSUs7swdNPKnvfRNuwDel6BaLP+VjPZL5Oy+gXdlrssFTfjwJPiGqiD4Q6JIaKQFjaPjuXOcaYb947LmP23BCsW+xMs0cnaHDhspquH+Yk5ArBm2nhoj81rF6PjfSq6IATEPuLjjurYl0LSBFHh6gjobZxy66+Cl+nCOYxwnsNwq4Z6wiChCIrq6BPr2mxoPHLCGkJ5GEIkORb43Zs=) 2026-01-28 00:24:11.326730 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHO6Ofeck2C9pfjmMROnY3iAe/TGWKPAjY6yzyKeoIob5ZOSVaJyF2L4UwSInkZh72Cbqi/5sLDT57B/uVYnlPs=) 2026-01-28 00:24:11.326748 | orchestrator | 2026-01-28 00:24:11.326759 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-28 00:24:11.326770 | orchestrator | Wednesday 28 January 2026 00:24:10 +0000 (0:00:01.040) 0:00:22.413 ***** 2026-01-28 00:24:11.326793 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCZO8QbVnKvMihJufEZupf45KqnsPlCfBoGgJ7rJMbNX9vPDkSs389L3gKSlkLcvmI6M//BndmwJz07Tw7OlEHg01YLswbQO0NTP4QSknmOjBKVH9qsHykld8LHbYs+hKEA1wn8CIhZMJ+EDtnltv3ACbNOGJA5ID3Zv027XJd3RqNGP+HfePFZPU1z1N3pDoI0rnYnVwKA2AWU7W+8wv0URYGCy8oydlnnE4g5e7xvurxBy6dpSUGuaqXAjkTMqNzKpTbYGxAV4dlcb2Dnbq0WC3n7i17EO1Px4b32lRZyfDNtMe/g6Hi/k/U9hZ1KRdkb0m9QGHD9MIU8VkBlePsohXWvDwbS/TEQ6Kdc+XqF+icTtvsE0GZyUDVFShYg7SdEpNaqIu4SHDdCsAWDv4/9SJX2wiy+AlLp3MH+1gYIZ1Kx2nRR9aL/dyABjQIRFgatD0a84KfJyPJxJdW8o+U27icNbq2blX15Kp0d+9wjpZLXOSuAURJcXODZ6iCKaLc=) 2026-01-28 00:24:15.698591 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDiWOGhz7Zd++B4LThJfUs5NRmlhfFzRz5jL63ouEJ3lwye79XnnkGxlXr7o5yr3Ck1rXujh/Etvw1GFEmQqIvk=) 2026-01-28 00:24:15.698725 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHO6YykHGIS3u0RaKpivJ8YRpDpsQaA+zgIhIXMBjQxw) 2026-01-28 00:24:15.698743 | orchestrator | 2026-01-28 00:24:15.698756 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-28 00:24:15.698769 | orchestrator | Wednesday 28 January 2026 00:24:11 +0000 (0:00:01.049) 0:00:23.463 ***** 2026-01-28 00:24:15.698780 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJgpD2Jl1P8uwCh8kIByWWVVDTOj8Y8cp0snRF/8vsWvXuEfXbcTauqJB5amHahbbJACnhSqfAK8Dqe5EZIsUVQ=) 2026-01-28 00:24:15.698795 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1pTnMCsQDYUQvrbW+jLdVlPqnWmIFiDWJ/iG3NkLlTubFhlgQpo96KhgH6t7gvAa8CuMIvcD89rrVLGkVF9kWAaTZELlcSu9qJQPCnGSNgDtoKpC/2CjyeTN2o/XLFWjpHSnDwL6xzGR3iNvHepK9t7LAqzgYHpvonN3yihCiebH18T/kTgKpWuJdhc7Q0MCahOzvawvcBZOgknCdBiqvMRHNxwoYKRFoIsw7UW0paVNNwtXZkRz8dKXPS9bHxJimmTRPRN8T1aaEnuF8yWk9/9X6a2711FRORgB3+eXKy27WZSQGu7yknQrXzTCUxwfHQ9uLgK3MuTAhn9JvB+DBm65L+gQ6ykYQxfL7N0IWd6Lf1LG51r7IQS7fMUoPwwgqCIURblZbK6NK1o4lPSwbvkit+o66CUQBRg7Ps9cDHCBfnG46cnTfk8EqkiQ8/BIPQTvMLELwDD2jtCOC7r+uMKo4jXuzS9Pzx/ctFeRxAAbK0eLxW/ZYePa6zl0uLrU=) 2026-01-28 00:24:15.698809 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDWp5fifrCw6XFAIXAMlSNG4UBRKzmz+jZ5TfaPa6LZH) 2026-01-28 00:24:15.698820 | orchestrator | 2026-01-28 00:24:15.698831 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-28 00:24:15.698843 | orchestrator | Wednesday 28 January 2026 00:24:12 +0000 (0:00:01.036) 0:00:24.499 ***** 2026-01-28 00:24:15.698854 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGs3tKZKtpTiA487Nvssq0LmA2rqdw+fHJKBcPRNCbM7T5HMQfYNd5VKvxbrHve10S1/WE0VMW39iIrFmbUUMZE=) 2026-01-28 00:24:15.698866 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPl5Rhp6XV6BjnE1TePRd5Gmu5zRajBxe4BaOJa7F8CodAog532GJurxZ/SPEB0CVGmF5RCJM+cn6MkGzyCUide21/yxfgHUyS3CB8xVb210kLJKiuJTr6DVo38Bnb3IyDgBKvaqkZT6CWoVNuNB7D3TA4zvk65OrAOb155MHZz37aflYc/SZbEL10OocxrQj9jq5l+p3sOSzkcSRvumHPoS6O0yOSc0yR9e0U4xcAw130shGQBPWQ1Ng1f70iXx8bBXyvsggvxfqTTq4KTqi2Lc/iGUCnjPC2/AdLaFdUPWctuTAzD0SB5vg/YTM1E+mrG41BRWnDbBNTXhReTIRkpw1yc7tTukbtjkvnMxtryaBBa1liUvBiOwIUR/tpLX0o2iQaVoCSPHUh4kUA1dOrisYw2LuFzPqa3FyVg0rnFR5g/QCEap3KyekDc9vZMalYfNhbb9jQpeuplaW5wrla+sWgpKkQZCG9j0yzQ3vfFJqvsl2ShrAq4duqfskDljU=) 2026-01-28 00:24:15.698877 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIEeevlnmj3RvxAnxNRvXcVsZseAjY4b5AwUAWwCe0Fh) 2026-01-28 00:24:15.698889 | orchestrator | 2026-01-28 00:24:15.698900 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-01-28 00:24:15.698911 | orchestrator | Wednesday 28 January 2026 00:24:13 +0000 (0:00:01.045) 0:00:25.544 ***** 2026-01-28 00:24:15.698922 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfG0bJVbOJUT+ug+aRB7tpXQe+NHE48OsDV24pjoejeuPlGC6MrVOkO3dlrh4sKAew3jIIzTFoyCtjmiy2fYcTLCrAPhm0LBPgKw7vXYe9MWPWs6xX6k/f3GEXmqVEXPOzO8q0X5G3cdMAIY05KJ2c1NhI5HUCiyyzV7fsGQIw0lu7WAtPoywpvTB/NOCDPgG4U7r64zFYGlIVHtmGVew1faAVfr9bsYJZY1YB+l8jbB46wPntAIm9sXhICA8s3ON7anJuG7M3M1Avx72/J6pXtHs1FwKQuyZqsBU7Xna8Gy+BdXvyVf1TP3APj8n1M4NJ0WdzYlu1oNBmvHXbBrmbdp+H0AoJjXI0/eCGRFf+sNTM0bR7HK3RBs+AIPHUd3I6T+6FSyjdXT/GlTr20RPOGLI3ka058bRRD/9ZootxdE0YE65T8hR3SanwOsT1Vr3p3LOIN1mbL2Ic1ewMR3yX0q+V1Jgls6i/09KrSi7tPOYXWegwr17axx+TSwNE7qk=) 2026-01-28 00:24:15.698951 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAAceTn49mTSgMLyTSJMSkeBY6GbfqT18q/Ezxl1TVGdGJGYyfGjHcMmX/52R+4NvaQbaOI1hjH8nwdaIvx/ho8=) 2026-01-28 00:24:15.698963 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHGBtfzWcf+O77KiLyGOuRmU+c89+sh/0GfXqpU4REZh) 2026-01-28 00:24:15.698985 | orchestrator | 2026-01-28 00:24:15.698996 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-01-28 00:24:15.699006 | orchestrator | Wednesday 28 January 2026 00:24:14 +0000 (0:00:01.090) 0:00:26.635 ***** 2026-01-28 00:24:15.699018 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-28 00:24:15.699029 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-28 00:24:15.699056 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-28 00:24:15.699068 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-28 00:24:15.699080 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-28 00:24:15.699094 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-28 00:24:15.699106 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-28 00:24:15.699119 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:24:15.699134 | orchestrator | 2026-01-28 00:24:15.699147 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-01-28 00:24:15.699159 | orchestrator | Wednesday 28 January 2026 00:24:14 +0000 (0:00:00.180) 0:00:26.815 ***** 2026-01-28 00:24:15.699233 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:24:15.699248 | orchestrator | 2026-01-28 00:24:15.699261 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-01-28 00:24:15.699273 | orchestrator | Wednesday 28 January 2026 00:24:14 +0000 (0:00:00.056) 0:00:26.872 ***** 2026-01-28 00:24:15.699285 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:24:15.699298 | orchestrator | 2026-01-28 00:24:15.699309 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-01-28 00:24:15.699322 | orchestrator | Wednesday 28 January 2026 00:24:14 +0000 (0:00:00.070) 0:00:26.942 ***** 2026-01-28 00:24:15.699334 | orchestrator | changed: [testbed-manager] 2026-01-28 00:24:15.699346 | orchestrator | 2026-01-28 00:24:15.699358 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:24:15.699377 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-28 00:24:15.699391 | orchestrator | 2026-01-28 00:24:15.699402 | orchestrator | 2026-01-28 
00:24:15.699414 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:24:15.699427 | orchestrator | Wednesday 28 January 2026 00:24:15 +0000 (0:00:00.696) 0:00:27.639 ***** 2026-01-28 00:24:15.699439 | orchestrator | =============================================================================== 2026-01-28 00:24:15.699449 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.88s 2026-01-28 00:24:15.699460 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.32s 2026-01-28 00:24:15.699472 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-01-28 00:24:15.699483 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-01-28 00:24:15.699494 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-01-28 00:24:15.699504 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-28 00:24:15.699515 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-28 00:24:15.699526 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-01-28 00:24:15.699537 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2026-01-28 00:24:15.699547 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-28 00:24:15.699558 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2026-01-28 00:24:15.699569 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-28 00:24:15.699588 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-28 
00:24:15.699600 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-28 00:24:15.699611 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-01-28 00:24:15.699622 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2026-01-28 00:24:15.699633 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.70s 2026-01-28 00:24:15.699644 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-01-28 00:24:15.699655 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-01-28 00:24:15.699673 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-01-28 00:24:15.994427 | orchestrator | + osism apply squid 2026-01-28 00:24:28.021465 | orchestrator | 2026-01-28 00:24:28 | INFO  | Task ea42af32-b4fe-48af-be02-8531fd755193 (squid) was prepared for execution. 2026-01-28 00:24:28.021576 | orchestrator | 2026-01-28 00:24:28 | INFO  | It takes a moment until task ea42af32-b4fe-48af-be02-8531fd755193 (squid) has been started and output is visible here. 
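The known_hosts play above repeatedly runs "Write scanned known_hosts entries": each `ssh-keyscan` result is appended per host (and again per `ansible_host` IP), then file permissions are set. A minimal sketch of that write-if-absent-then-chmod pattern, with hypothetical file paths and example key material (not the role's actual implementation):

```shell
#!/usr/bin/env sh
# Illustrative sketch only: append a scanned known_hosts entry when it is
# not already present, then tighten permissions, mirroring the
# "Write scanned known_hosts entries" / "Set file permissions" tasks.
KNOWN_HOSTS="${KNOWN_HOSTS:-/tmp/known_hosts.demo}"

add_entry() {
    # $1 = full known_hosts line, e.g. "testbed-node-0 ssh-ed25519 AAAA..."
    grep -qxF "$1" "$KNOWN_HOSTS" 2>/dev/null || printf '%s\n' "$1" >> "$KNOWN_HOSTS"
}

: > "$KNOWN_HOSTS"
add_entry "testbed-node-0 ssh-ed25519 AAAAC3example"
add_entry "testbed-node-0 ssh-ed25519 AAAAC3example"   # duplicate, skipped
chmod 0644 "$KNOWN_HOSTS"
```

Writing both the hostname and the `ansible_host` IP entries (as the play does) means SSH trusts the host keys regardless of which form a later connection uses.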
2026-01-28 00:26:24.877939 | orchestrator | 2026-01-28 00:26:24.878068 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-01-28 00:26:24.878080 | orchestrator | 2026-01-28 00:26:24.878087 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-01-28 00:26:24.878095 | orchestrator | Wednesday 28 January 2026 00:24:31 +0000 (0:00:00.142) 0:00:00.142 ***** 2026-01-28 00:26:24.878102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-01-28 00:26:24.878110 | orchestrator | 2026-01-28 00:26:24.878117 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-01-28 00:26:24.878127 | orchestrator | Wednesday 28 January 2026 00:24:31 +0000 (0:00:00.075) 0:00:00.217 ***** 2026-01-28 00:26:24.878136 | orchestrator | ok: [testbed-manager] 2026-01-28 00:26:24.878166 | orchestrator | 2026-01-28 00:26:24.878174 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-01-28 00:26:24.878235 | orchestrator | Wednesday 28 January 2026 00:24:32 +0000 (0:00:01.185) 0:00:01.402 ***** 2026-01-28 00:26:24.878243 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-01-28 00:26:24.878249 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-01-28 00:26:24.878256 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-01-28 00:26:24.878263 | orchestrator | 2026-01-28 00:26:24.878269 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-01-28 00:26:24.878276 | orchestrator | Wednesday 28 January 2026 00:24:34 +0000 (0:00:01.057) 0:00:02.460 ***** 2026-01-28 00:26:24.878282 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-01-28 00:26:24.878288 | 
orchestrator | 2026-01-28 00:26:24.878295 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-01-28 00:26:24.878301 | orchestrator | Wednesday 28 January 2026 00:24:35 +0000 (0:00:01.066) 0:00:03.526 ***** 2026-01-28 00:26:24.878307 | orchestrator | ok: [testbed-manager] 2026-01-28 00:26:24.878314 | orchestrator | 2026-01-28 00:26:24.878320 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-01-28 00:26:24.878326 | orchestrator | Wednesday 28 January 2026 00:24:35 +0000 (0:00:00.353) 0:00:03.880 ***** 2026-01-28 00:26:24.878333 | orchestrator | changed: [testbed-manager] 2026-01-28 00:26:24.878339 | orchestrator | 2026-01-28 00:26:24.878349 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-01-28 00:26:24.878355 | orchestrator | Wednesday 28 January 2026 00:24:36 +0000 (0:00:00.893) 0:00:04.774 ***** 2026-01-28 00:26:24.878362 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-01-28 00:26:24.878391 | orchestrator | ok: [testbed-manager]
2026-01-28 00:26:24.878399 | orchestrator |
2026-01-28 00:26:24.878405 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-01-28 00:26:24.878411 | orchestrator | Wednesday 28 January 2026 00:25:11 +0000 (0:00:35.528) 0:00:40.302 *****
2026-01-28 00:26:24.878418 | orchestrator | changed: [testbed-manager]
2026-01-28 00:26:24.878424 | orchestrator |
2026-01-28 00:26:24.878430 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-01-28 00:26:24.878436 | orchestrator | Wednesday 28 January 2026 00:25:23 +0000 (0:00:11.988) 0:00:52.291 *****
2026-01-28 00:26:24.878443 | orchestrator | Pausing for 60 seconds
2026-01-28 00:26:24.878450 | orchestrator | changed: [testbed-manager]
2026-01-28 00:26:24.878456 | orchestrator |
2026-01-28 00:26:24.878465 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-01-28 00:26:24.878475 | orchestrator | Wednesday 28 January 2026 00:26:23 +0000 (0:01:00.079) 0:01:52.370 *****
2026-01-28 00:26:24.878483 | orchestrator | ok: [testbed-manager]
2026-01-28 00:26:24.878493 | orchestrator |
2026-01-28 00:26:24.878501 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-01-28 00:26:24.878508 | orchestrator | Wednesday 28 January 2026 00:26:24 +0000 (0:00:00.064) 0:01:52.435 *****
2026-01-28 00:26:24.878515 | orchestrator | changed: [testbed-manager]
2026-01-28 00:26:24.878522 | orchestrator |
2026-01-28 00:26:24.878533 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:26:24.878542 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:26:24.878549 | orchestrator |
2026-01-28 00:26:24.878556 | orchestrator |
2026-01-28 00:26:24.878564 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:26:24.878571 | orchestrator | Wednesday 28 January 2026 00:26:24 +0000 (0:00:00.591) 0:01:53.026 *****
2026-01-28 00:26:24.878578 | orchestrator | ===============================================================================
2026-01-28 00:26:24.878585 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-01-28 00:26:24.878592 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.53s
2026-01-28 00:26:24.878599 | orchestrator | osism.services.squid : Restart squid service --------------------------- 11.99s
2026-01-28 00:26:24.878620 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.19s
2026-01-28 00:26:24.878628 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s
2026-01-28 00:26:24.878635 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.06s
2026-01-28 00:26:24.878642 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.89s
2026-01-28 00:26:24.878649 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s
2026-01-28 00:26:24.878657 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s
2026-01-28 00:26:24.878664 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2026-01-28 00:26:24.878671 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-01-28 00:26:25.192235 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-01-28 00:26:25.193261 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-01-28 00:26:25.255659 | orchestrator | + [[ -1 -ge 0 ]]
2026-01-28 00:26:25.255759 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-01-28 00:26:25.263653 | orchestrator | + set -e
2026-01-28 00:26:25.264210 | orchestrator | + NAMESPACE=kolla/release
2026-01-28 00:26:25.264238 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-01-28 00:26:25.270876 | orchestrator | ++ semver 9.5.0 9.0.0
2026-01-28 00:26:25.332959 | orchestrator | + [[ 1 -lt 0 ]]
2026-01-28 00:26:25.333479 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-01-28 00:26:37.472483 | orchestrator | 2026-01-28 00:26:37 | INFO  | Task 3ca21bc1-bc92-4fea-a188-49a9f2792769 (operator) was prepared for execution.
2026-01-28 00:26:37.472618 | orchestrator | 2026-01-28 00:26:37 | INFO  | It takes a moment until task 3ca21bc1-bc92-4fea-a188-49a9f2792769 (operator) has been started and output is visible here.
2026-01-28 00:26:54.346594 | orchestrator |
2026-01-28 00:26:54.346700 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-01-28 00:26:54.346718 | orchestrator |
2026-01-28 00:26:54.346730 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-01-28 00:26:54.346760 | orchestrator | Wednesday 28 January 2026 00:26:41 +0000 (0:00:00.142) 0:00:00.142 *****
2026-01-28 00:26:54.346783 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:26:54.346797 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:26:54.346808 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:26:54.346820 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:26:54.346831 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:26:54.346842 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:26:54.346853 | orchestrator |
2026-01-28 00:26:54.346864 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-01-28 00:26:54.346875 | orchestrator | Wednesday 28 January 2026 00:26:45 +0000 (0:00:04.164) 0:00:04.307 *****
2026-01-28 00:26:54.346886 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:26:54.346897 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:26:54.346908 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:26:54.346919 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:26:54.346930 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:26:54.346941 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:26:54.346952 | orchestrator |
2026-01-28 00:26:54.346963 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-01-28 00:26:54.346973 | orchestrator |
2026-01-28 00:26:54.346984 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-01-28 00:26:54.346995 | orchestrator | Wednesday 28 January 2026 00:26:46 +0000 (0:00:00.720) 0:00:05.027 *****
2026-01-28 00:26:54.347006 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:26:54.347017 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:26:54.347029 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:26:54.347058 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:26:54.347070 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:26:54.347081 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:26:54.347092 | orchestrator |
2026-01-28 00:26:54.347103 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-01-28 00:26:54.347114 | orchestrator | Wednesday 28 January 2026 00:26:46 +0000 (0:00:00.198) 0:00:05.225 *****
2026-01-28 00:26:54.347127 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:26:54.347140 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:26:54.347152 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:26:54.347165 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:26:54.347207 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:26:54.347219 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:26:54.347230 | orchestrator |
2026-01-28 00:26:54.347241 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-01-28 00:26:54.347252 | orchestrator | Wednesday 28 January 2026 00:26:46 +0000 (0:00:00.162) 0:00:05.388 *****
2026-01-28 00:26:54.347263 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:26:54.347276 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:26:54.347287 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:26:54.347298 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:26:54.347308 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:26:54.347319 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:26:54.347330 | orchestrator |
2026-01-28 00:26:54.347341 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-01-28 00:26:54.347352 | orchestrator | Wednesday 28 January 2026 00:26:47 +0000 (0:00:00.841) 0:00:06.009 *****
2026-01-28 00:26:54.347363 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:26:54.347374 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:26:54.347385 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:26:54.347423 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:26:54.347435 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:26:54.347446 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:26:54.347457 | orchestrator |
2026-01-28 00:26:54.347468 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-01-28 00:26:54.347478 | orchestrator | Wednesday 28 January 2026 00:26:48 +0000 (0:00:01.095) 0:00:06.851 *****
2026-01-28 00:26:54.347489 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-01-28 00:26:54.347500 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-01-28 00:26:54.347511 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-01-28 00:26:54.347522 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-01-28 00:26:54.347532 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-01-28 00:26:54.347543 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-01-28 00:26:54.347554 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-01-28 00:26:54.347565 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-01-28 00:26:54.347576 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-01-28 00:26:54.347586 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-01-28 00:26:54.347597 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-01-28 00:26:54.347608 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-01-28 00:26:54.347618 | orchestrator |
2026-01-28 00:26:54.347629 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-01-28 00:26:54.347640 | orchestrator | Wednesday 28 January 2026 00:26:49 +0000 (0:00:01.324) 0:00:07.946 *****
2026-01-28 00:26:54.347651 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:26:54.347662 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:26:54.347673 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:26:54.347684 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:26:54.347695 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:26:54.347706 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:26:54.347716 | orchestrator |
2026-01-28 00:26:54.347727 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-01-28 00:26:54.347739 | orchestrator | Wednesday 28 January 2026 00:26:50 +0000 (0:00:01.375) 0:00:09.271 *****
2026-01-28 00:26:54.347750 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-01-28 00:26:54.347761 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-01-28 00:26:54.347772 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-01-28 00:26:54.347783 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-01-28 00:26:54.347812 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-01-28 00:26:54.347824 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-01-28 00:26:54.347835 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-01-28 00:26:54.347846 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-01-28 00:26:54.347856 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-01-28 00:26:54.347867 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-01-28 00:26:54.347878 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-01-28 00:26:54.347888 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-01-28 00:26:54.347899 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-01-28 00:26:54.347910 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-01-28 00:26:54.347921 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-01-28 00:26:54.347932 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-01-28 00:26:54.347943 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-01-28 00:26:54.347954 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-01-28 00:26:54.347973 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-01-28 00:26:54.347984 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-01-28 00:26:54.347995 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-01-28 00:26:54.348006 | orchestrator |
2026-01-28 00:26:54.348017 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-01-28 00:26:54.348029 | orchestrator | Wednesday 28 January 2026 00:26:52 +0000 (0:00:01.375) 0:00:10.646 *****
2026-01-28 00:26:54.348040 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:26:54.348051 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:26:54.348062 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:26:54.348073 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:26:54.348084 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:26:54.348094 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:26:54.348105 | orchestrator |
2026-01-28 00:26:54.348116 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-01-28 00:26:54.348127 | orchestrator | Wednesday 28 January 2026 00:26:52 +0000 (0:00:00.158) 0:00:10.805 *****
2026-01-28 00:26:54.348138 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:26:54.348149 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:26:54.348160 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:26:54.348171 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:26:54.348211 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:26:54.348230 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:26:54.348241 | orchestrator |
2026-01-28 00:26:54.348252 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-01-28 00:26:54.348263 | orchestrator | Wednesday 28 January 2026 00:26:52 +0000 (0:00:00.200) 0:00:11.006 *****
2026-01-28 00:26:54.348273 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:26:54.348284 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:26:54.348295 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:26:54.348305 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:26:54.348316 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:26:54.348327 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:26:54.348337 | orchestrator |
2026-01-28 00:26:54.348348 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-01-28 00:26:54.348359 | orchestrator | Wednesday 28 January 2026 00:26:53 +0000 (0:00:00.649) 0:00:11.656 *****
2026-01-28 00:26:54.348370 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:26:54.348380 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:26:54.348391 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:26:54.348402 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:26:54.348412 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:26:54.348423 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:26:54.348433 | orchestrator |
2026-01-28 00:26:54.348444 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-01-28 00:26:54.348455 | orchestrator | Wednesday 28 January 2026 00:26:53 +0000 (0:00:00.164) 0:00:11.820 *****
2026-01-28 00:26:54.348466 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-28 00:26:54.348486 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:26:54.348498 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-01-28 00:26:54.348508 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-28 00:26:54.348519 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:26:54.348530 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:26:54.348541 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-01-28 00:26:54.348552 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:26:54.348562 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-28 00:26:54.348573 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:26:54.348584 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-28 00:26:54.348595 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:26:54.348613 | orchestrator |
2026-01-28 00:26:54.348624 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-01-28 00:26:54.348634 | orchestrator | Wednesday 28 January 2026 00:26:54 +0000 (0:00:00.842) 0:00:12.663 *****
2026-01-28 00:26:54.348645 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:26:54.348656 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:26:54.348666 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:26:54.348677 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:26:54.348688 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:26:54.348699 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:26:54.348709 | orchestrator |
2026-01-28 00:26:54.348720 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-01-28 00:26:54.348731 | orchestrator | Wednesday 28 January 2026 00:26:54 +0000 (0:00:00.164) 0:00:12.827 *****
2026-01-28 00:26:54.348742 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:26:54.348753 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:26:54.348763 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:26:54.348774 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:26:54.348793 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:26:55.643285 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:26:55.643381 | orchestrator |
2026-01-28 00:26:55.643396 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-01-28 00:26:55.643407 | orchestrator | Wednesday 28 January 2026 00:26:54 +0000 (0:00:00.138) 0:00:12.966 *****
2026-01-28 00:26:55.643416 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:26:55.643427 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:26:55.643436 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:26:55.643445 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:26:55.643453 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:26:55.643462 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:26:55.643470 | orchestrator |
2026-01-28 00:26:55.643479 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-01-28 00:26:55.643488 | orchestrator | Wednesday 28 January 2026 00:26:54 +0000 (0:00:00.137) 0:00:13.103 *****
2026-01-28 00:26:55.643496 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:26:55.643505 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:26:55.643514 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:26:55.643522 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:26:55.643531 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:26:55.643539 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:26:55.643547 | orchestrator |
2026-01-28 00:26:55.643556 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-01-28 00:26:55.643565 | orchestrator | Wednesday 28 January 2026 00:26:55 +0000 (0:00:00.703) 0:00:13.807 *****
2026-01-28 00:26:55.643574 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:26:55.643582 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:26:55.643591 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:26:55.643618 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:26:55.643627 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:26:55.643636 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:26:55.643644 | orchestrator |
2026-01-28 00:26:55.643653 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:26:55.643662 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-28 00:26:55.643673 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-28 00:26:55.643681 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-28 00:26:55.643690 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-28 00:26:55.643717 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-28 00:26:55.643727 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-28 00:26:55.643735 | orchestrator |
2026-01-28 00:26:55.643744 | orchestrator |
2026-01-28 00:26:55.643753 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:26:55.643761 | orchestrator | Wednesday 28 January 2026 00:26:55 +0000 (0:00:00.229) 0:00:14.037 *****
2026-01-28 00:26:55.643770 | orchestrator | ===============================================================================
2026-01-28 00:26:55.643778 | orchestrator | Gathering Facts --------------------------------------------------------- 4.16s
2026-01-28 00:26:55.643787 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.38s
2026-01-28 00:26:55.643797 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.32s
2026-01-28 00:26:55.643805 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.10s
2026-01-28 00:26:55.643815 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.84s
2026-01-28 00:26:55.643825 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s
2026-01-28 00:26:55.643835 | orchestrator | Do not require tty for all users ---------------------------------------- 0.72s
2026-01-28 00:26:55.643844 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s
2026-01-28 00:26:55.643854 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.65s
2026-01-28 00:26:55.643864 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s
2026-01-28 00:26:55.643874 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2026-01-28 00:26:55.643884 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s
2026-01-28 00:26:55.643893 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s
2026-01-28 00:26:55.643903 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2026-01-28 00:26:55.643913 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-01-28 00:26:55.643922 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2026-01-28 00:26:55.643932 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2026-01-28 00:26:55.643942 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2026-01-28 00:26:55.643952 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-01-28 00:26:55.929020 | orchestrator | + osism apply --environment custom facts
2026-01-28 00:26:57.858230 | orchestrator | 2026-01-28 00:26:57 | INFO  | Trying to run play facts in environment custom
2026-01-28 00:27:07.929358 | orchestrator | 2026-01-28 00:27:07 | INFO  | Task 9d00f482-dfbf-453a-8d7a-aa4a8184a933 (facts) was prepared for execution.
2026-01-28 00:27:07.929476 | orchestrator | 2026-01-28 00:27:07 | INFO  | It takes a moment until task 9d00f482-dfbf-453a-8d7a-aa4a8184a933 (facts) has been started and output is visible here.
2026-01-28 00:27:54.818841 | orchestrator |
2026-01-28 00:27:54.818957 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-01-28 00:27:54.818973 | orchestrator |
2026-01-28 00:27:54.818984 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-28 00:27:54.818996 | orchestrator | Wednesday 28 January 2026 00:27:11 +0000 (0:00:00.084) 0:00:00.084 *****
2026-01-28 00:27:54.819007 | orchestrator | ok: [testbed-manager]
2026-01-28 00:27:54.819021 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:27:54.819033 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:27:54.819075 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:27:54.819086 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:27:54.819098 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:27:54.819108 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:27:54.819119 | orchestrator |
2026-01-28 00:27:54.819130 | orchestrator | TASK [Copy fact file] **********************************************************
2026-01-28 00:27:54.819141 | orchestrator | Wednesday 28 January 2026 00:27:13 +0000 (0:00:01.418) 0:00:01.503 *****
2026-01-28 00:27:54.819152 | orchestrator | ok: [testbed-manager]
2026-01-28 00:27:54.819163 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:27:54.819198 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:27:54.819209 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:27:54.819220 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:27:54.819231 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:27:54.819241 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:27:54.819252 | orchestrator |
2026-01-28 00:27:54.819263 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-01-28 00:27:54.819274 | orchestrator |
2026-01-28 00:27:54.819284 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-01-28 00:27:54.819295 | orchestrator | Wednesday 28 January 2026 00:27:14 +0000 (0:00:01.353) 0:00:02.856 *****
2026-01-28 00:27:54.819306 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:27:54.819317 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:27:54.819327 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:27:54.819338 | orchestrator |
2026-01-28 00:27:54.819349 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-01-28 00:27:54.819361 | orchestrator | Wednesday 28 January 2026 00:27:14 +0000 (0:00:00.096) 0:00:02.953 *****
2026-01-28 00:27:54.819372 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:27:54.819385 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:27:54.819396 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:27:54.819409 | orchestrator |
2026-01-28 00:27:54.819422 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-01-28 00:27:54.819434 | orchestrator | Wednesday 28 January 2026 00:27:15 +0000 (0:00:00.211) 0:00:03.164 *****
2026-01-28 00:27:54.819446 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:27:54.819458 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:27:54.819471 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:27:54.819483 | orchestrator |
2026-01-28 00:27:54.819495 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-01-28 00:27:54.819507 | orchestrator | Wednesday 28 January 2026 00:27:15 +0000 (0:00:00.141) 0:00:03.393 *****
2026-01-28 00:27:54.819520 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:27:54.819533 | orchestrator |
2026-01-28 00:27:54.819546 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-01-28 00:27:54.819558 | orchestrator | Wednesday 28 January 2026 00:27:15 +0000 (0:00:00.541) 0:00:03.534 *****
2026-01-28 00:27:54.819570 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:27:54.819582 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:27:54.819595 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:27:54.819608 | orchestrator |
2026-01-28 00:27:54.819620 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-01-28 00:27:54.819632 | orchestrator | Wednesday 28 January 2026 00:27:15 +0000 (0:00:00.170) 0:00:04.076 *****
2026-01-28 00:27:54.819660 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:27:54.819682 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:27:54.819695 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:27:54.819707 | orchestrator |
2026-01-28 00:27:54.819720 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-01-28 00:27:54.819733 | orchestrator | Wednesday 28 January 2026 00:27:16 +0000 (0:00:00.170) 0:00:04.246 *****
2026-01-28 00:27:54.819744 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:27:54.819755 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:27:54.819774 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:27:54.819785 | orchestrator |
2026-01-28 00:27:54.819796 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-01-28 00:27:54.819807 | orchestrator | Wednesday 28 January 2026 00:27:17 +0000 (0:00:01.136) 0:00:05.382 *****
2026-01-28 00:27:54.819817 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:27:54.819828 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:27:54.819839 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:27:54.819849 | orchestrator |
2026-01-28 00:27:54.819860 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-28 00:27:54.819871 | orchestrator | Wednesday 28 January 2026 00:27:17 +0000 (0:00:00.515) 0:00:05.898 *****
2026-01-28 00:27:54.819881 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:27:54.819892 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:27:54.819903 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:27:54.819914 | orchestrator |
2026-01-28 00:27:54.819924 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-28 00:27:54.819983 | orchestrator | Wednesday 28 January 2026 00:27:18 +0000 (0:00:01.067) 0:00:06.965 *****
2026-01-28 00:27:54.819996 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:27:54.820007 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:27:54.820017 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:27:54.820028 | orchestrator |
2026-01-28 00:27:54.820039 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-01-28 00:27:54.820050 | orchestrator | Wednesday 28 January 2026 00:27:36 +0000 (0:00:17.588) 0:00:24.554 *****
2026-01-28 00:27:54.820060 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:27:54.820071 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:27:54.820082 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:27:54.820093 | orchestrator |
2026-01-28 00:27:54.820104 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-01-28 00:27:54.820133 | orchestrator | Wednesday 28 January 2026 00:27:36 +0000 (0:00:00.100) 0:00:24.654 *****
2026-01-28 00:27:54.820145 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:27:54.820156 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:27:54.820166 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:27:54.820214 | orchestrator |
2026-01-28 00:27:54.820226 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-01-28 00:27:54.820237 | orchestrator | Wednesday 28 January 2026 00:27:45 +0000 (0:00:08.514) 0:00:33.168 *****
2026-01-28 00:27:54.820248 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:27:54.820259 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:27:54.820270 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:27:54.820281 | orchestrator |
2026-01-28 00:27:54.820291 | orchestrator | TASK [Copy fact files] *********************************************************
2026-01-28 00:27:54.820302 | orchestrator | Wednesday 28 January 2026 00:27:45 +0000 (0:00:00.452) 0:00:33.620 *****
2026-01-28 00:27:54.820313 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-01-28 00:27:54.820329 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-01-28 00:27:54.820340 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-01-28 00:27:54.820351 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-01-28 00:27:54.820362 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-01-28 00:27:54.820372 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-01-28 00:27:54.820383 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-01-28 00:27:54.820394 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-01-28 00:27:54.820404 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-01-28 00:27:54.820415 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-01-28 00:27:54.820426 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-01-28 00:27:54.820444 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-01-28 00:27:54.820455 | orchestrator |
2026-01-28 00:27:54.820466 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-01-28 00:27:54.820476 | orchestrator | Wednesday 28 January 2026 00:27:49 +0000 (0:00:03.740) 0:00:37.361 *****
2026-01-28 00:27:54.820487 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:27:54.820498 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:27:54.820509 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:27:54.820519 | orchestrator |
2026-01-28 00:27:54.820530 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-28 00:27:54.820541 | orchestrator |
2026-01-28 00:27:54.820552 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-28 00:27:54.820563 | orchestrator | Wednesday 28 January 2026 00:27:50 +0000 (0:00:01.449) 0:00:38.810 *****
2026-01-28 00:27:54.820574 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:27:54.820585 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:27:54.820595 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:27:54.820606 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:27:54.820617 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:27:54.820628 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:27:54.820638 | orchestrator | ok: [testbed-manager]
2026-01-28 00:27:54.820649 | orchestrator |
2026-01-28 00:27:54.820660 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:27:54.820672 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:27:54.820683 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:27:54.820695 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:27:54.820706 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:27:54.820717 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:27:54.820728 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:27:54.820739 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:27:54.820749 | orchestrator |
2026-01-28 00:27:54.820760 | orchestrator |
2026-01-28 00:27:54.820772 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:27:54.820782 | orchestrator | Wednesday 28 January 2026 00:27:54 +0000 (0:00:04.116) 0:00:42.927 *****
2026-01-28 00:27:54.820793 | orchestrator | ===============================================================================
2026-01-28 00:27:54.820804 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.59s
2026-01-28 00:27:54.820815 | orchestrator | Install required packages (Debian) -------------------------------------- 8.51s
2026-01-28 00:27:54.820825 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.12s
2026-01-28 00:27:54.820836 | orchestrator | Copy fact files --------------------------------------------------------- 3.74s
2026-01-28 00:27:54.820847 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.45s
2026-01-28 00:27:54.820858 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s
2026-01-28 00:27:54.820875 | orchestrator | Copy fact file ---------------------------------------------------------- 1.35s
2026-01-28 00:27:55.014673 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.14s
2026-01-28 00:27:55.014745 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2026-01-28 00:27:55.014770 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.54s
2026-01-28 00:27:55.014776 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.52s
2026-01-28 00:27:55.014781 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2026-01-28 00:27:55.014786 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2026-01-28 00:27:55.014790 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-01-28 00:27:55.014805 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.17s
2026-01-28 00:27:55.014810 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2026-01-28 00:27:55.014816 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-01-28 00:27:55.014820 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.10s
2026-01-28 00:27:55.212435 | orchestrator | + osism apply bootstrap
2026-01-28 00:28:07.045542 | orchestrator | 2026-01-28 00:28:07 | INFO  | Task 25e416cd-6953-4cda-9231-d8897ad4c8f1 (bootstrap) was prepared for execution.
2026-01-28 00:28:07.045656 | orchestrator | 2026-01-28 00:28:07 | INFO  | It takes a moment until task 25e416cd-6953-4cda-9231-d8897ad4c8f1 (bootstrap) has been started and output is visible here.
2026-01-28 00:28:23.323352 | orchestrator | 2026-01-28 00:28:23.323465 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-01-28 00:28:23.323482 | orchestrator | 2026-01-28 00:28:23.323494 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-01-28 00:28:23.323505 | orchestrator | Wednesday 28 January 2026 00:28:10 +0000 (0:00:00.136) 0:00:00.136 ***** 2026-01-28 00:28:23.323517 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:23.323530 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:23.323541 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:23.323552 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:23.323564 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:23.323575 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:23.323586 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:23.323597 | orchestrator | 2026-01-28 00:28:23.323608 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-28 00:28:23.323619 | orchestrator | 2026-01-28 00:28:23.323630 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-28 00:28:23.323640 | orchestrator | Wednesday 28 January 2026 00:28:11 +0000 (0:00:00.209) 0:00:00.346 ***** 2026-01-28 00:28:23.323651 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:23.323662 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:23.323673 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:23.323684 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:23.323694 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:23.323705 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:23.323716 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:23.323727 | orchestrator | 2026-01-28 00:28:23.323737 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-01-28 00:28:23.323748 | orchestrator | 2026-01-28 00:28:23.323759 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-28 00:28:23.323770 | orchestrator | Wednesday 28 January 2026 00:28:14 +0000 (0:00:03.708) 0:00:04.054 ***** 2026-01-28 00:28:23.323782 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-01-28 00:28:23.323793 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-01-28 00:28:23.323804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-01-28 00:28:23.323814 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-01-28 00:28:23.323825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:28:23.323836 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-01-28 00:28:23.323875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-28 00:28:23.323889 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-01-28 00:28:23.323902 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-01-28 00:28:23.323914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:28:23.323926 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-01-28 00:28:23.323938 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-01-28 00:28:23.323951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-28 00:28:23.323963 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-01-28 00:28:23.323976 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-28 00:28:23.323988 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-01-28 00:28:23.324000 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-01-28 00:28:23.324012 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-01-28 00:28:23.324024 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-01-28 00:28:23.324037 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:28:23.324050 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-01-28 00:28:23.324062 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-01-28 00:28:23.324075 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-01-28 00:28:23.324087 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-01-28 00:28:23.324114 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-01-28 00:28:23.324126 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-01-28 00:28:23.324138 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-01-28 00:28:23.324150 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-01-28 00:28:23.324162 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:28:23.324174 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-01-28 00:28:23.324211 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-01-28 00:28:23.324223 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-01-28 00:28:23.324236 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:28:23.324247 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-01-28 00:28:23.324258 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:28:23.324269 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-01-28 00:28:23.324280 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-01-28 00:28:23.324291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-28 00:28:23.324301 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-01-28 
00:28:23.324312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-28 00:28:23.324323 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-01-28 00:28:23.324334 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-01-28 00:28:23.324344 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-01-28 00:28:23.324355 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-28 00:28:23.324366 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:28:23.324376 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-01-28 00:28:23.324387 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-01-28 00:28:23.324416 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-01-28 00:28:23.324428 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-01-28 00:28:23.324438 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-01-28 00:28:23.324449 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-01-28 00:28:23.324460 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:28:23.324480 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-01-28 00:28:23.324491 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-01-28 00:28:23.324502 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-28 00:28:23.324513 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:28:23.324524 | orchestrator | 2026-01-28 00:28:23.324552 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-01-28 00:28:23.324564 | orchestrator | 2026-01-28 00:28:23.324574 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-01-28 00:28:23.324585 | orchestrator | Wednesday 28 January 2026 00:28:15 +0000 (0:00:00.513) 
0:00:04.568 ***** 2026-01-28 00:28:23.324596 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:23.324607 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:23.324618 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:23.324629 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:23.324639 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:23.324650 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:23.324661 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:23.324672 | orchestrator | 2026-01-28 00:28:23.324683 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-01-28 00:28:23.324694 | orchestrator | Wednesday 28 January 2026 00:28:16 +0000 (0:00:01.247) 0:00:05.816 ***** 2026-01-28 00:28:23.324705 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:23.324716 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:23.324726 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:23.324737 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:23.324748 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:23.324759 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:23.324770 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:23.324780 | orchestrator | 2026-01-28 00:28:23.324791 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-01-28 00:28:23.324802 | orchestrator | Wednesday 28 January 2026 00:28:17 +0000 (0:00:01.298) 0:00:07.114 ***** 2026-01-28 00:28:23.324814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:28:23.324827 | orchestrator | 2026-01-28 00:28:23.324838 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-01-28 00:28:23.324849 | orchestrator | Wednesday 28 
January 2026 00:28:18 +0000 (0:00:00.380) 0:00:07.494 ***** 2026-01-28 00:28:23.324860 | orchestrator | changed: [testbed-manager] 2026-01-28 00:28:23.324871 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:28:23.324882 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:28:23.324893 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:28:23.324904 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:28:23.324915 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:28:23.324925 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:28:23.324936 | orchestrator | 2026-01-28 00:28:23.324947 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-01-28 00:28:23.324958 | orchestrator | Wednesday 28 January 2026 00:28:20 +0000 (0:00:02.210) 0:00:09.705 ***** 2026-01-28 00:28:23.324969 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:28:23.324981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:28:23.324993 | orchestrator | 2026-01-28 00:28:23.325004 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-01-28 00:28:23.325015 | orchestrator | Wednesday 28 January 2026 00:28:20 +0000 (0:00:00.287) 0:00:09.993 ***** 2026-01-28 00:28:23.325026 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:28:23.325037 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:28:23.325048 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:28:23.325066 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:28:23.325076 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:28:23.325087 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:28:23.325098 | orchestrator | 2026-01-28 00:28:23.325109 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-01-28 00:28:23.325120 | orchestrator | Wednesday 28 January 2026 00:28:22 +0000 (0:00:01.153) 0:00:11.146 ***** 2026-01-28 00:28:23.325131 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:28:23.325142 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:28:23.325152 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:28:23.325163 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:28:23.325174 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:28:23.325213 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:28:23.325232 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:28:23.325251 | orchestrator | 2026-01-28 00:28:23.325278 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-01-28 00:28:23.325296 | orchestrator | Wednesday 28 January 2026 00:28:22 +0000 (0:00:00.696) 0:00:11.843 ***** 2026-01-28 00:28:23.325309 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:28:23.325320 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:28:23.325331 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:28:23.325341 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:28:23.325352 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:28:23.325363 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:28:23.325373 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:23.325384 | orchestrator | 2026-01-28 00:28:23.325395 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-01-28 00:28:23.325406 | orchestrator | Wednesday 28 January 2026 00:28:23 +0000 (0:00:00.481) 0:00:12.324 ***** 2026-01-28 00:28:23.325417 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:28:23.325428 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:28:23.325447 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:28:36.410482 | orchestrator | skipping: 
[testbed-node-5] 2026-01-28 00:28:36.410589 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:28:36.410605 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:28:36.410617 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:28:36.410629 | orchestrator | 2026-01-28 00:28:36.410641 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-01-28 00:28:36.410654 | orchestrator | Wednesday 28 January 2026 00:28:23 +0000 (0:00:00.220) 0:00:12.545 ***** 2026-01-28 00:28:36.410667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:28:36.410695 | orchestrator | 2026-01-28 00:28:36.410706 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-01-28 00:28:36.410719 | orchestrator | Wednesday 28 January 2026 00:28:23 +0000 (0:00:00.290) 0:00:12.835 ***** 2026-01-28 00:28:36.410730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:28:36.410741 | orchestrator | 2026-01-28 00:28:36.410753 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-01-28 00:28:36.410764 | orchestrator | Wednesday 28 January 2026 00:28:24 +0000 (0:00:00.318) 0:00:13.154 ***** 2026-01-28 00:28:36.410775 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.410786 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:36.410798 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:36.410809 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:36.410820 | orchestrator | ok: [testbed-node-1] 2026-01-28 
00:28:36.410830 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:36.410841 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:36.410880 | orchestrator | 2026-01-28 00:28:36.410892 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-01-28 00:28:36.410903 | orchestrator | Wednesday 28 January 2026 00:28:25 +0000 (0:00:01.673) 0:00:14.827 ***** 2026-01-28 00:28:36.410914 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:28:36.410925 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:28:36.410936 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:28:36.410947 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:28:36.410958 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:28:36.410968 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:28:36.410979 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:28:36.410991 | orchestrator | 2026-01-28 00:28:36.411005 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-01-28 00:28:36.411017 | orchestrator | Wednesday 28 January 2026 00:28:25 +0000 (0:00:00.225) 0:00:15.053 ***** 2026-01-28 00:28:36.411029 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.411041 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:36.411054 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:36.411066 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:36.411078 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:36.411091 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:36.411103 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:36.411116 | orchestrator | 2026-01-28 00:28:36.411127 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-01-28 00:28:36.411138 | orchestrator | Wednesday 28 January 2026 00:28:26 +0000 (0:00:00.593) 0:00:15.646 ***** 2026-01-28 00:28:36.411149 | orchestrator | skipping: 
[testbed-manager] 2026-01-28 00:28:36.411159 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:28:36.411170 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:28:36.411209 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:28:36.411222 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:28:36.411233 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:28:36.411244 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:28:36.411254 | orchestrator | 2026-01-28 00:28:36.411266 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-01-28 00:28:36.411278 | orchestrator | Wednesday 28 January 2026 00:28:26 +0000 (0:00:00.338) 0:00:15.985 ***** 2026-01-28 00:28:36.411289 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.411300 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:28:36.411310 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:28:36.411321 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:28:36.411332 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:28:36.411342 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:28:36.411353 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:28:36.411364 | orchestrator | 2026-01-28 00:28:36.411374 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-01-28 00:28:36.411385 | orchestrator | Wednesday 28 January 2026 00:28:27 +0000 (0:00:00.597) 0:00:16.582 ***** 2026-01-28 00:28:36.411396 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.411406 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:28:36.411417 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:28:36.411428 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:28:36.411438 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:28:36.411449 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:28:36.411469 | orchestrator | changed: 
[testbed-node-1] 2026-01-28 00:28:36.411480 | orchestrator | 2026-01-28 00:28:36.411491 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-01-28 00:28:36.411502 | orchestrator | Wednesday 28 January 2026 00:28:28 +0000 (0:00:01.218) 0:00:17.801 ***** 2026-01-28 00:28:36.411512 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.411523 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:36.411534 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:36.411545 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:36.411556 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:36.411575 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:36.411586 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:36.411597 | orchestrator | 2026-01-28 00:28:36.411608 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-01-28 00:28:36.411619 | orchestrator | Wednesday 28 January 2026 00:28:29 +0000 (0:00:01.130) 0:00:18.931 ***** 2026-01-28 00:28:36.411647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:28:36.411660 | orchestrator | 2026-01-28 00:28:36.411671 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-01-28 00:28:36.411682 | orchestrator | Wednesday 28 January 2026 00:28:30 +0000 (0:00:00.300) 0:00:19.231 ***** 2026-01-28 00:28:36.411693 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:28:36.411704 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:28:36.411715 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:28:36.411726 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:28:36.411736 | orchestrator | changed: [testbed-node-0] 2026-01-28 
00:28:36.411747 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:28:36.411758 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:28:36.411769 | orchestrator | 2026-01-28 00:28:36.411780 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-01-28 00:28:36.411790 | orchestrator | Wednesday 28 January 2026 00:28:31 +0000 (0:00:01.348) 0:00:20.580 ***** 2026-01-28 00:28:36.411801 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.411812 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:36.411823 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:36.411834 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:36.411845 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:36.411856 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:36.411866 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:36.411877 | orchestrator | 2026-01-28 00:28:36.411888 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-01-28 00:28:36.411899 | orchestrator | Wednesday 28 January 2026 00:28:31 +0000 (0:00:00.244) 0:00:20.825 ***** 2026-01-28 00:28:36.411910 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.411920 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:36.411931 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:36.411942 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:36.411953 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:36.411964 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:36.411975 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:36.411985 | orchestrator | 2026-01-28 00:28:36.411996 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-01-28 00:28:36.412014 | orchestrator | Wednesday 28 January 2026 00:28:31 +0000 (0:00:00.236) 0:00:21.061 ***** 2026-01-28 00:28:36.412032 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.412050 | 
orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:36.412068 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:36.412087 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:36.412104 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:36.412116 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:36.412126 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:36.412137 | orchestrator | 2026-01-28 00:28:36.412148 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-01-28 00:28:36.412159 | orchestrator | Wednesday 28 January 2026 00:28:32 +0000 (0:00:00.259) 0:00:21.320 ***** 2026-01-28 00:28:36.412170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:28:36.412224 | orchestrator | 2026-01-28 00:28:36.412245 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-01-28 00:28:36.412274 | orchestrator | Wednesday 28 January 2026 00:28:32 +0000 (0:00:00.277) 0:00:21.598 ***** 2026-01-28 00:28:36.412286 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.412296 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:36.412307 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:36.412318 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:36.412329 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:36.412340 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:36.412351 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:36.412361 | orchestrator | 2026-01-28 00:28:36.412372 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-01-28 00:28:36.412383 | orchestrator | Wednesday 28 January 2026 00:28:32 +0000 (0:00:00.537) 0:00:22.135 ***** 2026-01-28 00:28:36.412393 | orchestrator | 
skipping: [testbed-manager] 2026-01-28 00:28:36.412405 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:28:36.412415 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:28:36.412426 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:28:36.412437 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:28:36.412448 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:28:36.412458 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:28:36.412469 | orchestrator | 2026-01-28 00:28:36.412480 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-01-28 00:28:36.412491 | orchestrator | Wednesday 28 January 2026 00:28:33 +0000 (0:00:00.227) 0:00:22.362 ***** 2026-01-28 00:28:36.412501 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.412512 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:36.412523 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:36.412533 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:36.412544 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:28:36.412555 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:28:36.412566 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:28:36.412576 | orchestrator | 2026-01-28 00:28:36.412587 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-01-28 00:28:36.412598 | orchestrator | Wednesday 28 January 2026 00:28:34 +0000 (0:00:01.207) 0:00:23.570 ***** 2026-01-28 00:28:36.412609 | orchestrator | ok: [testbed-manager] 2026-01-28 00:28:36.412620 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:28:36.412631 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:28:36.412642 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:28:36.412653 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:28:36.412663 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:28:36.412674 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:28:36.412685 | orchestrator | 
2026-01-28 00:28:36.412696 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-01-28 00:28:36.412706 | orchestrator | Wednesday 28 January 2026 00:28:35 +0000 (0:00:00.658) 0:00:24.229 *****
2026-01-28 00:28:36.412717 | orchestrator | ok: [testbed-manager]
2026-01-28 00:28:36.412728 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:28:36.412739 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:28:36.412749 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:28:36.412770 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:29:19.498871 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:29:19.499008 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:29:19.499026 | orchestrator |
2026-01-28 00:29:19.499039 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-01-28 00:29:19.499052 | orchestrator | Wednesday 28 January 2026 00:28:36 +0000 (0:00:01.302) 0:00:25.532 *****
2026-01-28 00:29:19.499063 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.499076 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.499087 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.499099 | orchestrator | changed: [testbed-manager]
2026-01-28 00:29:19.499110 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:29:19.499122 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:29:19.499133 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:29:19.499144 | orchestrator |
2026-01-28 00:29:19.499155 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-01-28 00:29:19.499226 | orchestrator | Wednesday 28 January 2026 00:28:54 +0000 (0:00:17.821) 0:00:43.354 *****
2026-01-28 00:29:19.499248 | orchestrator | ok: [testbed-manager]
2026-01-28 00:29:19.499267 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.499284 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.499301 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.499320 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:29:19.499339 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:29:19.499358 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:29:19.499377 | orchestrator |
2026-01-28 00:29:19.499397 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-01-28 00:29:19.499415 | orchestrator | Wednesday 28 January 2026 00:28:54 +0000 (0:00:00.213) 0:00:43.568 *****
2026-01-28 00:29:19.499430 | orchestrator | ok: [testbed-manager]
2026-01-28 00:29:19.499441 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.499452 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.499463 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.499474 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:29:19.499484 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:29:19.499495 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:29:19.499506 | orchestrator |
2026-01-28 00:29:19.499517 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-01-28 00:29:19.499527 | orchestrator | Wednesday 28 January 2026 00:28:54 +0000 (0:00:00.228) 0:00:43.810 *****
2026-01-28 00:29:19.499538 | orchestrator | ok: [testbed-manager]
2026-01-28 00:29:19.499549 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.499559 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.499571 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.499582 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:29:19.499593 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:29:19.499603 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:29:19.499614 | orchestrator |
2026-01-28 00:29:19.499625 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-01-28 00:29:19.499636 | orchestrator | Wednesday 28 January 2026 00:28:54 +0000 (0:00:00.278) 0:00:44.038 *****
2026-01-28 00:29:19.499648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:29:19.499663 | orchestrator |
2026-01-28 00:29:19.499683 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-01-28 00:29:19.499701 | orchestrator | Wednesday 28 January 2026 00:28:55 +0000 (0:00:00.278) 0:00:44.317 *****
2026-01-28 00:29:19.499719 | orchestrator | ok: [testbed-manager]
2026-01-28 00:29:19.499737 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:29:19.499755 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.499773 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.499793 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.499811 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:29:19.499830 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:29:19.499846 | orchestrator |
2026-01-28 00:29:19.499865 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-01-28 00:29:19.499884 | orchestrator | Wednesday 28 January 2026 00:28:57 +0000 (0:00:01.937) 0:00:46.255 *****
2026-01-28 00:29:19.499903 | orchestrator | changed: [testbed-manager]
2026-01-28 00:29:19.499922 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:29:19.499940 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:29:19.499959 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:29:19.499976 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:29:19.499993 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:29:19.500013 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:29:19.500033 | orchestrator |
2026-01-28 00:29:19.500052 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-01-28 00:29:19.500071 | orchestrator | Wednesday 28 January 2026 00:28:58 +0000 (0:00:01.129) 0:00:47.385 *****
2026-01-28 00:29:19.500103 | orchestrator | ok: [testbed-manager]
2026-01-28 00:29:19.500122 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.500139 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.500157 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.500176 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:29:19.500261 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:29:19.500280 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:29:19.500297 | orchestrator |
2026-01-28 00:29:19.500315 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-01-28 00:29:19.500332 | orchestrator | Wednesday 28 January 2026 00:28:59 +0000 (0:00:00.849) 0:00:48.235 *****
2026-01-28 00:29:19.500363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:29:19.500386 | orchestrator |
2026-01-28 00:29:19.500403 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-01-28 00:29:19.500424 | orchestrator | Wednesday 28 January 2026 00:28:59 +0000 (0:00:00.300) 0:00:48.535 *****
2026-01-28 00:29:19.500442 | orchestrator | changed: [testbed-manager]
2026-01-28 00:29:19.500461 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:29:19.500480 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:29:19.500497 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:29:19.500517 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:29:19.500538 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:29:19.500560 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:29:19.500584 | orchestrator |
2026-01-28 00:29:19.500632 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-01-28 00:29:19.500653 | orchestrator | Wednesday 28 January 2026 00:29:00 +0000 (0:00:01.115) 0:00:49.651 *****
2026-01-28 00:29:19.500672 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:29:19.500690 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:29:19.500708 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:29:19.500726 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:29:19.500743 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:29:19.500761 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:29:19.500778 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:29:19.500795 | orchestrator |
2026-01-28 00:29:19.500812 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-01-28 00:29:19.500829 | orchestrator | Wednesday 28 January 2026 00:29:00 +0000 (0:00:00.206) 0:00:49.857 *****
2026-01-28 00:29:19.500847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:29:19.500866 | orchestrator |
2026-01-28 00:29:19.500884 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-01-28 00:29:19.500902 | orchestrator | Wednesday 28 January 2026 00:29:00 +0000 (0:00:00.251) 0:00:50.109 *****
2026-01-28 00:29:19.500920 | orchestrator | ok: [testbed-manager]
2026-01-28 00:29:19.500938 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.500955 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:29:19.500972 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:29:19.500990 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.501009 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.501027 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:29:19.501045 | orchestrator |
2026-01-28 00:29:19.501063 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-01-28 00:29:19.501081 | orchestrator | Wednesday 28 January 2026 00:29:02 +0000 (0:00:01.914) 0:00:52.023 *****
2026-01-28 00:29:19.501100 | orchestrator | changed: [testbed-manager]
2026-01-28 00:29:19.501117 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:29:19.501136 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:29:19.501153 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:29:19.501224 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:29:19.501245 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:29:19.501262 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:29:19.501281 | orchestrator |
2026-01-28 00:29:19.501300 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-01-28 00:29:19.501318 | orchestrator | Wednesday 28 January 2026 00:29:04 +0000 (0:00:01.196) 0:00:53.219 *****
2026-01-28 00:29:19.501339 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:29:19.501357 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:29:19.501375 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:29:19.501395 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:29:19.501413 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:29:19.501432 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:29:19.501450 | orchestrator | changed: [testbed-manager]
2026-01-28 00:29:19.501469 | orchestrator |
2026-01-28 00:29:19.501489 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-01-28 00:29:19.501508 | orchestrator | Wednesday 28 January 2026 00:29:16 +0000 (0:00:12.000) 0:01:05.220 *****
2026-01-28 00:29:19.501528 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:29:19.501547 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.501566 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:29:19.501585 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.501604 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.501622 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:29:19.501640 | orchestrator | ok: [testbed-manager]
2026-01-28 00:29:19.501660 | orchestrator |
2026-01-28 00:29:19.501680 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-01-28 00:29:19.501699 | orchestrator | Wednesday 28 January 2026 00:29:17 +0000 (0:00:01.578) 0:01:06.798 *****
2026-01-28 00:29:19.501712 | orchestrator | ok: [testbed-manager]
2026-01-28 00:29:19.501722 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.501731 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.501741 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.501750 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:29:19.501760 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:29:19.501769 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:29:19.501779 | orchestrator |
2026-01-28 00:29:19.501788 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-01-28 00:29:19.501798 | orchestrator | Wednesday 28 January 2026 00:29:18 +0000 (0:00:00.963) 0:01:07.761 *****
2026-01-28 00:29:19.501808 | orchestrator | ok: [testbed-manager]
2026-01-28 00:29:19.501817 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.501827 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.501837 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.501854 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:29:19.501871 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:29:19.501888 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:29:19.501905 | orchestrator |
2026-01-28 00:29:19.501922 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-01-28 00:29:19.501940 | orchestrator | Wednesday 28 January 2026 00:29:18 +0000 (0:00:00.278) 0:01:08.040 *****
2026-01-28 00:29:19.501957 | orchestrator | ok: [testbed-manager]
2026-01-28 00:29:19.501978 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:29:19.501989 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:29:19.501998 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:29:19.502008 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:29:19.502076 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:29:19.502086 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:29:19.502096 | orchestrator |
2026-01-28 00:29:19.502106 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-01-28 00:29:19.502116 | orchestrator | Wednesday 28 January 2026 00:29:19 +0000 (0:00:00.253) 0:01:08.294 *****
2026-01-28 00:29:19.502128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:29:19.502150 | orchestrator |
2026-01-28 00:29:19.502177 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-01-28 00:31:30.313740 | orchestrator | Wednesday 28 January 2026 00:29:19 +0000 (0:00:00.332) 0:01:08.627 *****
2026-01-28 00:31:30.313851 | orchestrator | ok: [testbed-manager]
2026-01-28 00:31:30.313868 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:31:30.313877 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:31:30.313887 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:31:30.313897 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:31:30.313907 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:31:30.313916 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:31:30.313926 | orchestrator |
2026-01-28 00:31:30.313936 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-01-28 00:31:30.313946 | orchestrator | Wednesday 28 January 2026 00:29:21 +0000 (0:00:01.876) 0:01:10.503 *****
2026-01-28 00:31:30.313956 | orchestrator | changed: [testbed-manager]
2026-01-28 00:31:30.313967 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:31:30.313977 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:31:30.313986 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:31:30.313996 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:31:30.314005 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:31:30.314061 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:31:30.314072 | orchestrator |
2026-01-28 00:31:30.314082 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-01-28 00:31:30.314093 | orchestrator | Wednesday 28 January 2026 00:29:21 +0000 (0:00:00.611) 0:01:11.114 *****
2026-01-28 00:31:30.314103 | orchestrator | ok: [testbed-manager]
2026-01-28 00:31:30.314113 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:31:30.314123 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:31:30.314132 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:31:30.314142 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:31:30.314154 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:31:30.314164 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:31:30.314175 | orchestrator |
2026-01-28 00:31:30.314204 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-01-28 00:31:30.314214 | orchestrator | Wednesday 28 January 2026 00:29:22 +0000 (0:00:00.293) 0:01:11.408 *****
2026-01-28 00:31:30.314223 | orchestrator | ok: [testbed-manager]
2026-01-28 00:31:30.314233 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:31:30.314244 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:31:30.314255 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:31:30.314266 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:31:30.314275 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:31:30.314285 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:31:30.314294 | orchestrator |
2026-01-28 00:31:30.314304 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-01-28 00:31:30.314313 | orchestrator | Wednesday 28 January 2026 00:29:23 +0000 (0:00:01.333) 0:01:12.742 *****
2026-01-28 00:31:30.314323 | orchestrator | changed: [testbed-manager]
2026-01-28 00:31:30.314333 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:31:30.314342 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:31:30.314351 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:31:30.314361 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:31:30.314375 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:31:30.314385 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:31:30.314394 | orchestrator |
2026-01-28 00:31:30.314404 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-01-28 00:31:30.314413 | orchestrator | Wednesday 28 January 2026 00:29:25 +0000 (0:00:01.994) 0:01:14.736 *****
2026-01-28 00:31:30.314422 | orchestrator | ok: [testbed-manager]
2026-01-28 00:31:30.314432 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:31:30.314441 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:31:30.314450 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:31:30.314460 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:31:30.314499 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:31:30.314509 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:31:30.314518 | orchestrator |
2026-01-28 00:31:30.314527 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-01-28 00:31:30.314536 | orchestrator | Wednesday 28 January 2026 00:29:28 +0000 (0:00:34.178) 0:01:17.338 *****
2026-01-28 00:31:30.314545 | orchestrator | ok: [testbed-manager]
2026-01-28 00:31:30.314554 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:31:30.314563 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:31:30.314572 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:31:30.314582 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:31:30.314591 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:31:30.314601 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:31:30.314610 | orchestrator |
2026-01-28 00:31:30.314619 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-01-28 00:31:30.314628 | orchestrator | Wednesday 28 January 2026 00:30:02 +0000 (0:00:34.178) 0:01:51.516 *****
2026-01-28 00:31:30.314638 | orchestrator | changed: [testbed-manager]
2026-01-28 00:31:30.314647 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:31:30.314656 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:31:30.314665 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:31:30.314674 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:31:30.314683 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:31:30.314693 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:31:30.314702 | orchestrator |
2026-01-28 00:31:30.314712 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-01-28 00:31:30.314721 | orchestrator | Wednesday 28 January 2026 00:31:14 +0000 (0:01:11.998) 0:03:03.514 *****
2026-01-28 00:31:30.314730 | orchestrator | changed: [testbed-manager]
2026-01-28 00:31:30.314739 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:31:30.314748 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:31:30.314758 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:31:30.314767 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:31:30.314776 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:31:30.314785 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:31:30.314794 | orchestrator |
2026-01-28 00:31:30.314803 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-01-28 00:31:30.314813 | orchestrator | Wednesday 28 January 2026 00:31:16 +0000 (0:00:01.956) 0:03:05.470 *****
2026-01-28 00:31:30.314823 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:31:30.314831 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:31:30.314840 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:31:30.314850 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:31:30.314859 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:31:30.314868 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:31:30.314877 | orchestrator | changed: [testbed-manager]
2026-01-28 00:31:30.314886 | orchestrator |
2026-01-28 00:31:30.314896 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-01-28 00:31:30.314926 | orchestrator | Wednesday 28 January 2026 00:31:28 +0000 (0:00:12.623) 0:03:18.093 *****
2026-01-28 00:31:30.314951 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-01-28 00:31:30.314982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-01-28 00:31:30.315007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-01-28 00:31:30.315019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-28 00:31:30.315028 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-01-28 00:31:30.315038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-01-28 00:31:30.315047 | orchestrator |
2026-01-28 00:31:30.315057 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-01-28 00:31:30.315066 | orchestrator | Wednesday 28 January 2026 00:31:29 +0000 (0:00:00.557) 0:03:18.651 *****
2026-01-28 00:31:30.315076 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-28 00:31:30.315085 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:31:30.315095 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-28 00:31:30.315105 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-28 00:31:30.315114 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:31:30.315123 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-28 00:31:30.315132 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:31:30.315141 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:31:30.315150 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-28 00:31:30.315159 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-28 00:31:30.315173 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-28 00:31:30.315213 | orchestrator |
2026-01-28 00:31:30.315225 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-01-28 00:31:30.315234 | orchestrator | Wednesday 28 January 2026 00:31:30 +0000 (0:00:00.712) 0:03:19.363 *****
2026-01-28 00:31:30.315243 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-28 00:31:30.315254 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-28 00:31:30.315264 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-28 00:31:30.315274 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-28 00:31:30.315283 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-28 00:31:30.315302 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-28 00:31:38.641887 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-28 00:31:38.641985 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-28 00:31:38.641997 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-28 00:31:38.642008 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-28 00:31:38.642063 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-28 00:31:38.642072 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-28 00:31:38.642079 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:31:38.642088 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-28 00:31:38.642094 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-28 00:31:38.642101 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-28 00:31:38.642108 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-28 00:31:38.642114 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-28 00:31:38.642121 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-28 00:31:38.642127 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-28 00:31:38.642133 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-28 00:31:38.642139 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-28 00:31:38.642145 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-28 00:31:38.642151 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-28 00:31:38.642157 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-28 00:31:38.642164 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-28 00:31:38.642173 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-28 00:31:38.642219 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-28 00:31:38.642229 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-28 00:31:38.642236 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:31:38.642242 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-28 00:31:38.642249 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-28 00:31:38.642255 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-28 00:31:38.642261 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-28 00:31:38.642267 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-28 00:31:38.642273 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-28 00:31:38.642280 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-28 00:31:38.642286 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-28 00:31:38.642292 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-28 00:31:38.642321 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-28 00:31:38.642339 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-28 00:31:38.642345 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-28 00:31:38.642352 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:31:38.642358 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:31:38.642364 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-28 00:31:38.642370 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-28 00:31:38.642376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-01-28 00:31:38.642383 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-28 00:31:38.642389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-28 00:31:38.642410 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-01-28 00:31:38.642416 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-28 00:31:38.642423 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-28 00:31:38.642429 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-01-28 00:31:38.642436 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-28 00:31:38.642443 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-28 00:31:38.642450 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-01-28 00:31:38.642457 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-28 00:31:38.642464 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-28 00:31:38.642471 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-01-28 00:31:38.642478 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-28 00:31:38.642485 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-28 00:31:38.642492 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-01-28 00:31:38.642499 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-28 00:31:38.642506 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-28 00:31:38.642513 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-28 00:31:38.642520 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-28 00:31:38.642527 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-01-28 00:31:38.642534 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-28 00:31:38.642541 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-28 00:31:38.642548 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-28 00:31:38.642556 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-28 00:31:38.642563 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-01-28 00:31:38.642570 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-01-28 00:31:38.642582 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-01-28 00:31:38.642589 | orchestrator |
2026-01-28 00:31:38.642597 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-01-28 00:31:38.642604 | orchestrator | Wednesday 28 January 2026 00:31:37 +0000 (0:00:07.109) 0:03:26.473 *****
2026-01-28 00:31:38.642611 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-28 00:31:38.642618 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-28 00:31:38.642625 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-28 00:31:38.642632 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-28 00:31:38.642639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-28 00:31:38.642645 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-01-28 00:31:38.642652 | orchestrator | changed: [testbed-node-1] => (item={'name':
'vm.swappiness', 'value': 1}) 2026-01-28 00:31:38.642659 | orchestrator | 2026-01-28 00:31:38.642666 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-01-28 00:31:38.642673 | orchestrator | Wednesday 28 January 2026 00:31:38 +0000 (0:00:00.698) 0:03:27.171 ***** 2026-01-28 00:31:38.642684 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-28 00:31:38.642692 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:31:38.642699 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-28 00:31:38.642706 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-28 00:31:38.642713 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:31:38.642720 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:31:38.642727 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-28 00:31:38.642734 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:31:38.642741 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-28 00:31:38.642748 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-28 00:31:38.642759 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-28 00:31:55.526118 | orchestrator | 2026-01-28 00:31:55.526262 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-01-28 00:31:55.526288 | orchestrator | Wednesday 28 January 2026 00:31:38 +0000 (0:00:00.589) 0:03:27.761 ***** 2026-01-28 00:31:55.526306 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-28 00:31:55.526325 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-28 00:31:55.526336 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:31:55.526348 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-28 00:31:55.526359 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:31:55.526369 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-01-28 00:31:55.526380 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:31:55.526390 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:31:55.526400 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-28 00:31:55.526410 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-28 00:31:55.526449 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-01-28 00:31:55.526459 | orchestrator | 2026-01-28 00:31:55.526469 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-01-28 00:31:55.526479 | orchestrator | Wednesday 28 January 2026 00:31:40 +0000 (0:00:01.597) 0:03:29.358 ***** 2026-01-28 00:31:55.526489 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-28 00:31:55.526499 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:31:55.526509 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-28 00:31:55.526519 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-28 00:31:55.526528 | 
orchestrator | skipping: [testbed-node-0] 2026-01-28 00:31:55.526538 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:31:55.526548 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-01-28 00:31:55.526558 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:31:55.526568 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-28 00:31:55.526578 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-28 00:31:55.526588 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-01-28 00:31:55.526598 | orchestrator | 2026-01-28 00:31:55.526610 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-01-28 00:31:55.526621 | orchestrator | Wednesday 28 January 2026 00:31:42 +0000 (0:00:02.635) 0:03:31.993 ***** 2026-01-28 00:31:55.526632 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:31:55.526644 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:31:55.526655 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:31:55.526667 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:31:55.526678 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:31:55.526689 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:31:55.526700 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:31:55.526712 | orchestrator | 2026-01-28 00:31:55.526723 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-01-28 00:31:55.526734 | orchestrator | Wednesday 28 January 2026 00:31:43 +0000 (0:00:00.349) 0:03:32.342 ***** 2026-01-28 00:31:55.526745 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:31:55.526757 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:31:55.526769 | orchestrator | ok: [testbed-node-1] 
2026-01-28 00:31:55.526780 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:31:55.526791 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:31:55.526802 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:31:55.526813 | orchestrator | ok: [testbed-manager] 2026-01-28 00:31:55.526824 | orchestrator | 2026-01-28 00:31:55.526836 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-01-28 00:31:55.526847 | orchestrator | Wednesday 28 January 2026 00:31:48 +0000 (0:00:05.731) 0:03:38.074 ***** 2026-01-28 00:31:55.526858 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-01-28 00:31:55.526869 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-01-28 00:31:55.526881 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:31:55.526892 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:31:55.526903 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-01-28 00:31:55.526920 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-01-28 00:31:55.526937 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:31:55.526954 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:31:55.526970 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-01-28 00:31:55.526986 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-01-28 00:31:55.527022 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:31:55.527051 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:31:55.527068 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-01-28 00:31:55.527084 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:31:55.527099 | orchestrator | 2026-01-28 00:31:55.527115 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-01-28 00:31:55.527130 | orchestrator | Wednesday 28 January 2026 00:31:49 +0000 (0:00:00.316) 0:03:38.390 ***** 2026-01-28 00:31:55.527146 | orchestrator | ok: 
[testbed-manager] => (item=cron) 2026-01-28 00:31:55.527163 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-01-28 00:31:55.527264 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-01-28 00:31:55.527304 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-01-28 00:31:55.527316 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-01-28 00:31:55.527326 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-01-28 00:31:55.527335 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-01-28 00:31:55.527345 | orchestrator | 2026-01-28 00:31:55.527355 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-01-28 00:31:55.527365 | orchestrator | Wednesday 28 January 2026 00:31:50 +0000 (0:00:01.190) 0:03:39.581 ***** 2026-01-28 00:31:55.527377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:31:55.527389 | orchestrator | 2026-01-28 00:31:55.527399 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-01-28 00:31:55.527409 | orchestrator | Wednesday 28 January 2026 00:31:50 +0000 (0:00:00.539) 0:03:40.121 ***** 2026-01-28 00:31:55.527418 | orchestrator | ok: [testbed-manager] 2026-01-28 00:31:55.527434 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:31:55.527451 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:31:55.527467 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:31:55.527483 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:31:55.527494 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:31:55.527503 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:31:55.527513 | orchestrator | 2026-01-28 00:31:55.527523 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 
2026-01-28 00:31:55.527532 | orchestrator | Wednesday 28 January 2026 00:31:52 +0000 (0:00:01.375) 0:03:41.497 ***** 2026-01-28 00:31:55.527542 | orchestrator | ok: [testbed-manager] 2026-01-28 00:31:55.527551 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:31:55.527561 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:31:55.527570 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:31:55.527580 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:31:55.527589 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:31:55.527599 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:31:55.527608 | orchestrator | 2026-01-28 00:31:55.527618 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-01-28 00:31:55.527627 | orchestrator | Wednesday 28 January 2026 00:31:53 +0000 (0:00:00.708) 0:03:42.205 ***** 2026-01-28 00:31:55.527637 | orchestrator | changed: [testbed-manager] 2026-01-28 00:31:55.527653 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:31:55.527669 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:31:55.527685 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:31:55.527700 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:31:55.527716 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:31:55.527730 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:31:55.527747 | orchestrator | 2026-01-28 00:31:55.527764 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-01-28 00:31:55.527781 | orchestrator | Wednesday 28 January 2026 00:31:53 +0000 (0:00:00.698) 0:03:42.903 ***** 2026-01-28 00:31:55.527796 | orchestrator | ok: [testbed-manager] 2026-01-28 00:31:55.527811 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:31:55.527821 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:31:55.527831 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:31:55.527851 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:31:55.527860 | 
orchestrator | ok: [testbed-node-1] 2026-01-28 00:31:55.527870 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:31:55.527879 | orchestrator | 2026-01-28 00:31:55.527889 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-01-28 00:31:55.527899 | orchestrator | Wednesday 28 January 2026 00:31:54 +0000 (0:00:00.668) 0:03:43.572 ***** 2026-01-28 00:31:55.527913 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769558727.39428, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:31:55.527933 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769558716.0573792, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:31:55.527945 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769558732.8909485, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:31:55.527967 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769558729.5282252, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487129 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769558739.5151434, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487313 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769558723.8680143, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487332 | orchestrator | changed: 
[testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1769558741.9933877, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487368 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487444 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487458 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 
'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487470 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487512 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487525 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487537 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 00:32:00.487557 | orchestrator | 2026-01-28 00:32:00.487571 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-01-28 00:32:00.487583 | orchestrator | Wednesday 28 January 2026 00:31:55 +0000 (0:00:01.076) 0:03:44.649 ***** 2026-01-28 00:32:00.487595 | orchestrator | changed: [testbed-manager] 2026-01-28 00:32:00.487609 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:32:00.487620 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:32:00.487631 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:32:00.487645 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:32:00.487659 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:32:00.487678 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:32:00.487698 | orchestrator | 2026-01-28 00:32:00.487717 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-01-28 00:32:00.487734 | orchestrator | Wednesday 28 January 2026 00:31:56 +0000 (0:00:01.177) 0:03:45.826 ***** 2026-01-28 00:32:00.487752 | orchestrator | changed: [testbed-manager] 2026-01-28 00:32:00.487773 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:32:00.487794 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:32:00.487814 
| orchestrator | changed: [testbed-node-5] 2026-01-28 00:32:00.487833 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:32:00.487849 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:32:00.487862 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:32:00.487875 | orchestrator | 2026-01-28 00:32:00.487888 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-01-28 00:32:00.487901 | orchestrator | Wednesday 28 January 2026 00:31:57 +0000 (0:00:01.191) 0:03:47.018 ***** 2026-01-28 00:32:00.487913 | orchestrator | changed: [testbed-manager] 2026-01-28 00:32:00.487925 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:32:00.487938 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:32:00.487950 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:32:00.487962 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:32:00.487975 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:32:00.487994 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:32:00.488005 | orchestrator | 2026-01-28 00:32:00.488016 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-01-28 00:32:00.488027 | orchestrator | Wednesday 28 January 2026 00:31:59 +0000 (0:00:01.141) 0:03:48.160 ***** 2026-01-28 00:32:00.488038 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:32:00.488048 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:32:00.488059 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:32:00.488070 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:32:00.488080 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:32:00.488091 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:32:00.488102 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:32:00.488113 | orchestrator | 2026-01-28 00:32:00.488124 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-01-28 
00:32:00.488134 | orchestrator | Wednesday 28 January 2026 00:31:59 +0000 (0:00:00.301) 0:03:48.461 ***** 2026-01-28 00:32:00.488145 | orchestrator | ok: [testbed-manager] 2026-01-28 00:32:00.488156 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:32:00.488167 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:32:00.488178 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:32:00.488217 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:32:00.488228 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:32:00.488239 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:32:00.488250 | orchestrator | 2026-01-28 00:32:00.488261 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-01-28 00:32:00.488282 | orchestrator | Wednesday 28 January 2026 00:32:00 +0000 (0:00:00.720) 0:03:49.181 ***** 2026-01-28 00:32:00.488295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:32:00.488308 | orchestrator | 2026-01-28 00:32:00.488320 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-01-28 00:32:00.488340 | orchestrator | Wednesday 28 January 2026 00:32:00 +0000 (0:00:00.434) 0:03:49.616 ***** 2026-01-28 00:33:19.549481 | orchestrator | ok: [testbed-manager] 2026-01-28 00:33:19.549588 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:33:19.549606 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:33:19.549619 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:33:19.549630 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:33:19.549642 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:33:19.549653 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:33:19.549664 | orchestrator | 2026-01-28 00:33:19.549676 | orchestrator | TASK 
[osism.services.rng : Remove haveged package] ***************************** 2026-01-28 00:33:19.549689 | orchestrator | Wednesday 28 January 2026 00:32:09 +0000 (0:00:08.743) 0:03:58.360 ***** 2026-01-28 00:33:19.549700 | orchestrator | ok: [testbed-manager] 2026-01-28 00:33:19.549711 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:33:19.549723 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:33:19.549734 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:33:19.549745 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:33:19.549755 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:33:19.549766 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:33:19.549777 | orchestrator | 2026-01-28 00:33:19.549788 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-01-28 00:33:19.549800 | orchestrator | Wednesday 28 January 2026 00:32:10 +0000 (0:00:01.302) 0:03:59.663 ***** 2026-01-28 00:33:19.549811 | orchestrator | ok: [testbed-manager] 2026-01-28 00:33:19.549822 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:33:19.549833 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:33:19.549844 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:33:19.549854 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:33:19.549892 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:33:19.549903 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:33:19.549913 | orchestrator | 2026-01-28 00:33:19.549924 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-01-28 00:33:19.549935 | orchestrator | Wednesday 28 January 2026 00:32:11 +0000 (0:00:01.344) 0:04:01.007 ***** 2026-01-28 00:33:19.549946 | orchestrator | ok: [testbed-manager] 2026-01-28 00:33:19.549958 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:33:19.549969 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:33:19.549980 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:33:19.549991 | orchestrator | ok: 
[testbed-node-0]
2026-01-28 00:33:19.550002 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:33:19.550012 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:33:19.550091 | orchestrator |
2026-01-28 00:33:19.550102 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-01-28 00:33:19.550115 | orchestrator | Wednesday 28 January 2026 00:32:12 +0000 (0:00:00.365) 0:04:01.372 *****
2026-01-28 00:33:19.550125 | orchestrator | ok: [testbed-manager]
2026-01-28 00:33:19.550136 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:33:19.550147 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:33:19.550158 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:33:19.550168 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:33:19.550179 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:33:19.550266 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:33:19.550277 | orchestrator |
2026-01-28 00:33:19.550288 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-01-28 00:33:19.550299 | orchestrator | Wednesday 28 January 2026 00:32:12 +0000 (0:00:00.341) 0:04:01.713 *****
2026-01-28 00:33:19.550331 | orchestrator | ok: [testbed-manager]
2026-01-28 00:33:19.550342 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:33:19.550353 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:33:19.550364 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:33:19.550389 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:33:19.550400 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:33:19.550412 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:33:19.550422 | orchestrator |
2026-01-28 00:33:19.550433 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-01-28 00:33:19.550444 | orchestrator | Wednesday 28 January 2026 00:32:12 +0000 (0:00:00.339) 0:04:02.053 *****
2026-01-28 00:33:19.550455 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:33:19.550466 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:33:19.550476 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:33:19.550487 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:33:19.550498 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:33:19.550509 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:33:19.550519 | orchestrator | ok: [testbed-manager]
2026-01-28 00:33:19.550530 | orchestrator |
2026-01-28 00:33:19.550541 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-01-28 00:33:19.550552 | orchestrator | Wednesday 28 January 2026 00:32:17 +0000 (0:00:04.984) 0:04:07.037 *****
2026-01-28 00:33:19.550565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:33:19.550579 | orchestrator |
2026-01-28 00:33:19.550590 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-01-28 00:33:19.550601 | orchestrator | Wednesday 28 January 2026 00:32:18 +0000 (0:00:00.431) 0:04:07.468 *****
2026-01-28 00:33:19.550612 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-01-28 00:33:19.550623 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-01-28 00:33:19.550634 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:33:19.550645 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-01-28 00:33:19.550655 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-01-28 00:33:19.550666 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-01-28 00:33:19.550687 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:33:19.550704 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-01-28 00:33:19.550723 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-01-28 00:33:19.550736 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-01-28 00:33:19.550747 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:33:19.550758 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:33:19.550768 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-01-28 00:33:19.550779 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-01-28 00:33:19.550790 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-01-28 00:33:19.550801 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:33:19.550828 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-01-28 00:33:19.550840 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:33:19.550851 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-01-28 00:33:19.550861 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-01-28 00:33:19.550872 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:33:19.550883 | orchestrator |
2026-01-28 00:33:19.550894 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-01-28 00:33:19.550904 | orchestrator | Wednesday 28 January 2026 00:32:18 +0000 (0:00:00.354) 0:04:07.823 *****
2026-01-28 00:33:19.550916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:33:19.550934 | orchestrator |
2026-01-28 00:33:19.550945 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-01-28 00:33:19.550956 | orchestrator | Wednesday 28 January 2026 00:32:19 +0000 (0:00:00.419) 0:04:08.242 *****
2026-01-28 00:33:19.550967 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-01-28 00:33:19.550977 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-01-28 00:33:19.550988 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:33:19.550999 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-01-28 00:33:19.551010 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:33:19.551021 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-01-28 00:33:19.551031 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:33:19.551042 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-01-28 00:33:19.551053 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:33:19.551063 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-01-28 00:33:19.551074 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:33:19.551085 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:33:19.551096 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-01-28 00:33:19.551107 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:33:19.551118 | orchestrator |
2026-01-28 00:33:19.551128 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-01-28 00:33:19.551139 | orchestrator | Wednesday 28 January 2026 00:32:19 +0000 (0:00:00.348) 0:04:08.590 *****
2026-01-28 00:33:19.551150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:33:19.551161 | orchestrator |
2026-01-28 00:33:19.551172 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-01-28 00:33:19.551200 | orchestrator | Wednesday 28 January 2026 00:32:19 +0000 (0:00:00.451) 0:04:09.042 *****
2026-01-28 00:33:19.551212 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:33:19.551223 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:33:19.551234 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:33:19.551245 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:33:19.551256 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:33:19.551267 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:33:19.551278 | orchestrator | changed: [testbed-manager]
2026-01-28 00:33:19.551288 | orchestrator |
2026-01-28 00:33:19.551299 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-01-28 00:33:19.551310 | orchestrator | Wednesday 28 January 2026 00:32:54 +0000 (0:00:35.067) 0:04:44.110 *****
2026-01-28 00:33:19.551321 | orchestrator | changed: [testbed-manager]
2026-01-28 00:33:19.551332 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:33:19.551343 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:33:19.551358 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:33:19.551369 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:33:19.551380 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:33:19.551391 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:33:19.551401 | orchestrator |
2026-01-28 00:33:19.551412 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-01-28 00:33:19.551423 | orchestrator | Wednesday 28 January 2026 00:33:03 +0000 (0:00:08.092) 0:04:52.202 *****
2026-01-28 00:33:19.551434 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:33:19.551445 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:33:19.551455 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:33:19.551466 | orchestrator | changed: [testbed-manager]
2026-01-28 00:33:19.551477 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:33:19.551488 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:33:19.551505 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:33:19.551516 | orchestrator |
2026-01-28 00:33:19.551526 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-01-28 00:33:19.551537 | orchestrator | Wednesday 28 January 2026 00:33:11 +0000 (0:00:08.226) 0:05:00.429 *****
2026-01-28 00:33:19.551548 | orchestrator | ok: [testbed-manager]
2026-01-28 00:33:19.551559 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:33:19.551570 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:33:19.551581 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:33:19.551592 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:33:19.551603 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:33:19.551613 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:33:19.551624 | orchestrator |
2026-01-28 00:33:19.551635 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-01-28 00:33:19.551646 | orchestrator | Wednesday 28 January 2026 00:33:13 +0000 (0:00:01.940) 0:05:02.369 *****
2026-01-28 00:33:19.551657 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:33:19.551668 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:33:19.551679 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:33:19.551690 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:33:19.551700 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:33:19.551712 | orchestrator | changed: [testbed-manager]
2026-01-28 00:33:19.551723 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:33:19.551733 | orchestrator |
2026-01-28 00:33:19.551751 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-01-28 00:33:32.552970 | orchestrator | Wednesday 28 January 2026 00:33:19 +0000 (0:00:06.298) 0:05:08.668 *****
2026-01-28 00:33:32.553046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:33:32.553054 | orchestrator |
2026-01-28 00:33:32.553059 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-01-28 00:33:32.553064 | orchestrator | Wednesday 28 January 2026 00:33:20 +0000 (0:00:00.664) 0:05:09.333 *****
2026-01-28 00:33:32.553068 | orchestrator | changed: [testbed-manager]
2026-01-28 00:33:32.553074 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:33:32.553078 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:33:32.553082 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:33:32.553086 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:33:32.553090 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:33:32.553094 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:33:32.553097 | orchestrator |
2026-01-28 00:33:32.553101 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-01-28 00:33:32.553105 | orchestrator | Wednesday 28 January 2026 00:33:20 +0000 (0:00:00.760) 0:05:10.093 *****
2026-01-28 00:33:32.553109 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:33:32.553114 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:33:32.553118 | orchestrator | ok: [testbed-manager]
2026-01-28 00:33:32.553122 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:33:32.553126 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:33:32.553129 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:33:32.553133 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:33:32.553137 | orchestrator |
2026-01-28 00:33:32.553141 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-01-28 00:33:32.553145 | orchestrator | Wednesday 28 January 2026 00:33:22 +0000 (0:00:01.877) 0:05:11.971 *****
2026-01-28 00:33:32.553149 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:33:32.553153 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:33:32.553158 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:33:32.553162 | orchestrator | changed: [testbed-manager]
2026-01-28 00:33:32.553166 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:33:32.553170 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:33:32.553173 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:33:32.553243 | orchestrator |
2026-01-28 00:33:32.553251 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-01-28 00:33:32.553258 | orchestrator | Wednesday 28 January 2026 00:33:24 +0000 (0:00:01.796) 0:05:13.767 *****
2026-01-28 00:33:32.553264 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:33:32.553270 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:33:32.553276 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:33:32.553283 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:33:32.553289 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:33:32.553296 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:33:32.553302 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:33:32.553308 | orchestrator |
2026-01-28 00:33:32.553314 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-01-28 00:33:32.553320 | orchestrator | Wednesday 28 January 2026 00:33:24 +0000 (0:00:00.313) 0:05:14.080 *****
2026-01-28 00:33:32.553327 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:33:32.553333 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:33:32.553340 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:33:32.553347 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:33:32.553353 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:33:32.553359 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:33:32.553366 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:33:32.553372 | orchestrator |
2026-01-28 00:33:32.553378 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-01-28 00:33:32.553384 | orchestrator | Wednesday 28 January 2026 00:33:25 +0000 (0:00:00.387) 0:05:14.468 *****
2026-01-28 00:33:32.553391 | orchestrator | ok: [testbed-manager]
2026-01-28 00:33:32.553397 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:33:32.553417 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:33:32.553425 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:33:32.553432 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:33:32.553438 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:33:32.553444 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:33:32.553450 | orchestrator |
2026-01-28 00:33:32.553455 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-01-28 00:33:32.553459 | orchestrator | Wednesday 28 January 2026 00:33:25 +0000 (0:00:00.359) 0:05:14.828 *****
2026-01-28 00:33:32.553463 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:33:32.553467 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:33:32.553471 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:33:32.553475 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:33:32.553478 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:33:32.553482 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:33:32.553486 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:33:32.553490 | orchestrator |
2026-01-28 00:33:32.553494 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-01-28 00:33:32.553508 | orchestrator | Wednesday 28 January 2026 00:33:25 +0000 (0:00:00.306) 0:05:15.134 *****
2026-01-28 00:33:32.553512 | orchestrator | ok: [testbed-manager]
2026-01-28 00:33:32.553516 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:33:32.553520 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:33:32.553524 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:33:32.553534 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:33:32.553538 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:33:32.553543 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:33:32.553548 | orchestrator |
2026-01-28 00:33:32.553552 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-01-28 00:33:32.553557 | orchestrator | Wednesday 28 January 2026 00:33:26 +0000 (0:00:00.335) 0:05:15.470 *****
2026-01-28 00:33:32.553561 | orchestrator | ok: [testbed-manager] =>
2026-01-28 00:33:32.553566 | orchestrator |  docker_version: 5:27.5.1
2026-01-28 00:33:32.553570 | orchestrator | ok: [testbed-node-3] =>
2026-01-28 00:33:32.553575 | orchestrator |  docker_version: 5:27.5.1
2026-01-28 00:33:32.553579 | orchestrator | ok: [testbed-node-4] =>
2026-01-28 00:33:32.553590 | orchestrator |  docker_version: 5:27.5.1
2026-01-28 00:33:32.553594 | orchestrator | ok: [testbed-node-5] =>
2026-01-28 00:33:32.553599 | orchestrator |  docker_version: 5:27.5.1
2026-01-28 00:33:32.553617 | orchestrator | ok: [testbed-node-0] =>
2026-01-28 00:33:32.553624 | orchestrator |  docker_version: 5:27.5.1
2026-01-28 00:33:32.553631 | orchestrator | ok: [testbed-node-1] =>
2026-01-28 00:33:32.553638 | orchestrator |  docker_version: 5:27.5.1
2026-01-28 00:33:32.553644 | orchestrator | ok: [testbed-node-2] =>
2026-01-28 00:33:32.553651 | orchestrator |  docker_version: 5:27.5.1
2026-01-28 00:33:32.553658 | orchestrator |
2026-01-28 00:33:32.553665 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-01-28 00:33:32.553672 | orchestrator | Wednesday 28 January 2026 00:33:26 +0000 (0:00:00.317) 0:05:15.787 *****
2026-01-28 00:33:32.553679 | orchestrator | ok: [testbed-manager] =>
2026-01-28 00:33:32.553685 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-28 00:33:32.553693 | orchestrator | ok: [testbed-node-3] =>
2026-01-28 00:33:32.553698 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-28 00:33:32.553702 | orchestrator | ok: [testbed-node-4] =>
2026-01-28 00:33:32.553707 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-28 00:33:32.553711 | orchestrator | ok: [testbed-node-5] =>
2026-01-28 00:33:32.553716 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-28 00:33:32.553720 | orchestrator | ok: [testbed-node-0] =>
2026-01-28 00:33:32.553725 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-28 00:33:32.553729 | orchestrator | ok: [testbed-node-1] =>
2026-01-28 00:33:32.553734 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-28 00:33:32.553738 | orchestrator | ok: [testbed-node-2] =>
2026-01-28 00:33:32.553743 | orchestrator |  docker_cli_version: 5:27.5.1
2026-01-28 00:33:32.553747 | orchestrator |
2026-01-28 00:33:32.553752 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-01-28 00:33:32.553756 | orchestrator | Wednesday 28 January 2026 00:33:26 +0000 (0:00:00.343) 0:05:16.130 *****
2026-01-28 00:33:32.553761 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:33:32.553765 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:33:32.553769 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:33:32.553774 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:33:32.553778 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:33:32.553782 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:33:32.553787 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:33:32.553791 | orchestrator |
2026-01-28 00:33:32.553796 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-01-28 00:33:32.553801 | orchestrator | Wednesday 28 January 2026 00:33:27 +0000 (0:00:00.274) 0:05:16.405 *****
2026-01-28 00:33:32.553805 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:33:32.553809 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:33:32.553814 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:33:32.553818 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:33:32.553822 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:33:32.553827 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:33:32.553831 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:33:32.553835 | orchestrator |
2026-01-28 00:33:32.553840 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-01-28 00:33:32.553844 | orchestrator | Wednesday 28 January 2026 00:33:27 +0000 (0:00:00.337) 0:05:16.742 *****
2026-01-28 00:33:32.553850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:33:32.553856 | orchestrator |
2026-01-28 00:33:32.553861 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-01-28 00:33:32.553865 | orchestrator | Wednesday 28 January 2026 00:33:28 +0000 (0:00:00.453) 0:05:17.196 *****
2026-01-28 00:33:32.553870 | orchestrator | ok: [testbed-manager]
2026-01-28 00:33:32.553878 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:33:32.553883 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:33:32.553889 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:33:32.553896 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:33:32.553903 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:33:32.553910 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:33:32.553916 | orchestrator |
2026-01-28 00:33:32.553926 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-01-28 00:33:32.553932 | orchestrator | Wednesday 28 January 2026 00:33:29 +0000 (0:00:01.052) 0:05:18.248 *****
2026-01-28 00:33:32.553938 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:33:32.553945 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:33:32.553950 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:33:32.553956 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:33:32.553962 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:33:32.553968 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:33:32.553975 | orchestrator | ok: [testbed-manager]
2026-01-28 00:33:32.553981 | orchestrator |
2026-01-28 00:33:32.553988 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-01-28 00:33:32.553996 | orchestrator | Wednesday 28 January 2026 00:33:32 +0000 (0:00:03.012) 0:05:21.261 *****
2026-01-28 00:33:32.554002 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-01-28 00:33:32.554009 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-01-28 00:33:32.554048 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-01-28 00:33:32.554052 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-01-28 00:33:32.554056 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-01-28 00:33:32.554060 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-01-28 00:33:32.554064 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:33:32.554068 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-01-28 00:33:32.554072 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-01-28 00:33:32.554076 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-01-28 00:33:32.554080 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:33:32.554084 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-01-28 00:33:32.554088 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-01-28 00:33:32.554092 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-01-28 00:33:32.554095 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:33:32.554100 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-01-28 00:33:32.554109 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-01-28 00:34:36.029481 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-01-28 00:34:36.029570 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:34:36.029581 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-01-28 00:34:36.029589 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-01-28 00:34:36.029595 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-01-28 00:34:36.029602 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:34:36.029608 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:34:36.029615 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-01-28 00:34:36.029621 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-01-28 00:34:36.029627 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-01-28 00:34:36.029634 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:34:36.029640 | orchestrator |
2026-01-28 00:34:36.029648 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-01-28 00:34:36.029655 | orchestrator | Wednesday 28 January 2026 00:33:32 +0000 (0:00:00.655) 0:05:21.916 *****
2026-01-28 00:34:36.029662 | orchestrator | ok: [testbed-manager]
2026-01-28 00:34:36.029669 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.029675 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.029702 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.029709 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.029715 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.029722 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.029728 | orchestrator |
2026-01-28 00:34:36.029734 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-01-28 00:34:36.029741 | orchestrator | Wednesday 28 January 2026 00:33:39 +0000 (0:00:06.962) 0:05:28.879 *****
2026-01-28 00:34:36.029747 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.029753 | orchestrator | ok: [testbed-manager]
2026-01-28 00:34:36.029759 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.029766 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.029772 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.029778 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.029784 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.029790 | orchestrator |
2026-01-28 00:34:36.029797 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-01-28 00:34:36.029803 | orchestrator | Wednesday 28 January 2026 00:33:40 +0000 (0:00:01.140) 0:05:30.020 *****
2026-01-28 00:34:36.029809 | orchestrator | ok: [testbed-manager]
2026-01-28 00:34:36.029816 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.029822 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.029828 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.029834 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.029841 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.029847 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.029853 | orchestrator |
2026-01-28 00:34:36.029859 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-01-28 00:34:36.029866 | orchestrator | Wednesday 28 January 2026 00:33:49 +0000 (0:00:08.465) 0:05:38.485 *****
2026-01-28 00:34:36.029872 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.029879 | orchestrator | changed: [testbed-manager]
2026-01-28 00:34:36.029890 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.029900 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.029911 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.029922 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.029932 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.029941 | orchestrator |
2026-01-28 00:34:36.029950 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-01-28 00:34:36.029959 | orchestrator | Wednesday 28 January 2026 00:33:52 +0000 (0:00:03.575) 0:05:42.060 *****
2026-01-28 00:34:36.029968 | orchestrator | ok: [testbed-manager]
2026-01-28 00:34:36.029979 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.029987 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.029996 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.030007 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.030075 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.030089 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.030101 | orchestrator |
2026-01-28 00:34:36.030112 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-01-28 00:34:36.030123 | orchestrator | Wednesday 28 January 2026 00:33:54 +0000 (0:00:01.538) 0:05:43.598 *****
2026-01-28 00:34:36.030134 | orchestrator | ok: [testbed-manager]
2026-01-28 00:34:36.030145 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.030195 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.030209 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.030219 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.030230 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.030241 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.030250 | orchestrator |
2026-01-28 00:34:36.030261 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-01-28 00:34:36.030271 | orchestrator | Wednesday 28 January 2026 00:33:56 +0000 (0:00:01.589) 0:05:45.188 *****
2026-01-28 00:34:36.030282 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:34:36.030304 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:34:36.030316 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:34:36.030326 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:34:36.030336 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:34:36.030347 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:34:36.030358 | orchestrator | changed: [testbed-manager]
2026-01-28 00:34:36.030369 | orchestrator |
2026-01-28 00:34:36.030379 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-01-28 00:34:36.030389 | orchestrator | Wednesday 28 January 2026 00:33:56 +0000 (0:00:00.666) 0:05:45.854 *****
2026-01-28 00:34:36.030399 | orchestrator | ok: [testbed-manager]
2026-01-28 00:34:36.030409 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.030420 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.030430 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.030438 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.030447 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.030456 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.030466 | orchestrator |
2026-01-28 00:34:36.030477 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-01-28 00:34:36.030508 | orchestrator | Wednesday 28 January 2026 00:34:06 +0000 (0:00:09.827) 0:05:55.681 *****
2026-01-28 00:34:36.030519 | orchestrator | changed: [testbed-manager]
2026-01-28 00:34:36.030530 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.030539 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.030550 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.030561 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.030571 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.030582 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.030592 | orchestrator |
2026-01-28 00:34:36.030603 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-01-28 00:34:36.030614 | orchestrator | Wednesday 28 January 2026 00:34:07 +0000 (0:00:00.932) 0:05:56.613 *****
2026-01-28 00:34:36.030625 | orchestrator | ok: [testbed-manager]
2026-01-28 00:34:36.030636 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.030646 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.030657 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.030679 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.030690 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.030701 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.030712 | orchestrator |
2026-01-28 00:34:36.030723 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-01-28 00:34:36.030733 | orchestrator | Wednesday 28 January 2026 00:34:17 +0000 (0:00:09.831) 0:06:06.445 *****
2026-01-28 00:34:36.030740 | orchestrator | ok: [testbed-manager]
2026-01-28 00:34:36.030746 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.030752 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.030758 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.030765 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.030771 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.030777 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.030783 | orchestrator |
2026-01-28 00:34:36.030789 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-01-28 00:34:36.030795 | orchestrator | Wednesday 28 January 2026 00:34:28 +0000 (0:00:11.433) 0:06:17.879 *****
2026-01-28 00:34:36.030801 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-01-28 00:34:36.030808 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-01-28 00:34:36.030814 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-01-28 00:34:36.030820 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-01-28 00:34:36.030826 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-01-28 00:34:36.030832 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-01-28 00:34:36.030838 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-01-28 00:34:36.030851 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-01-28 00:34:36.030857 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-01-28 00:34:36.030863 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-01-28 00:34:36.030869 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-01-28 00:34:36.030876 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-01-28 00:34:36.030919 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-01-28 00:34:36.030926 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-01-28 00:34:36.030932 | orchestrator |
2026-01-28 00:34:36.030938 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-01-28 00:34:36.030944 | orchestrator | Wednesday 28 January 2026 00:34:29 +0000 (0:00:01.252) 0:06:19.131 *****
2026-01-28 00:34:36.030951 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:34:36.030957 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:34:36.030963 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:34:36.030969 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:34:36.030975 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:34:36.030982 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:34:36.030988 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:34:36.030994 | orchestrator |
2026-01-28 00:34:36.031000 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-01-28 00:34:36.031009 | orchestrator | Wednesday 28 January 2026 00:34:30 +0000 (0:00:00.570) 0:06:19.702 *****
2026-01-28 00:34:36.031016 | orchestrator | ok: [testbed-manager]
2026-01-28 00:34:36.031022 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:34:36.031028 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:34:36.031035 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:34:36.031041 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:34:36.031047 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:34:36.031053 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:34:36.031059 | orchestrator |
2026-01-28 00:34:36.031066 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-01-28 00:34:36.031073 | orchestrator | Wednesday 28 January 2026 00:34:34 +0000 (0:00:04.434) 0:06:24.136 *****
2026-01-28 00:34:36.031079 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:34:36.031085 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:34:36.031092 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:34:36.031098 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:34:36.031104 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:34:36.031110 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:34:36.031116 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:34:36.031122 | orchestrator |
2026-01-28 00:34:36.031129 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-01-28 00:34:36.031136 | orchestrator | Wednesday 28 January 2026 00:34:35 +0000 (0:00:00.515) 0:06:24.651 *****
2026-01-28 00:34:36.031142 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-01-28 00:34:36.031149 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-01-28 00:34:36.031177 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:34:36.031185 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-01-28 00:34:36.031191 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-01-28 00:34:36.031197 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:34:36.031203 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-01-28 00:34:36.031209 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-01-28 00:34:36.031216 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:34:36.031229 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-01-28 00:34:55.517298 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-01-28 00:34:55.517421 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:34:55.517473 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-01-28 00:34:55.517489 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-01-28 00:34:55.517502 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:34:55.517511 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-01-28 00:34:55.517519 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-01-28 00:34:55.517526 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:34:55.517533 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-01-28 00:34:55.517541 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-01-28 00:34:55.517548 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:34:55.517555 | orchestrator | 2026-01-28 00:34:55.517565 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2026-01-28 00:34:55.517573 | orchestrator | Wednesday 28 January 2026 00:34:36 +0000 (0:00:00.781) 0:06:25.432 ***** 2026-01-28 00:34:55.517581 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:34:55.517588 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:34:55.517595 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:34:55.517602 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:34:55.517610 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:34:55.517617 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:34:55.517624 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:34:55.517631 | orchestrator | 2026-01-28 00:34:55.517638 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-01-28 00:34:55.517646 | orchestrator | Wednesday 28 January 2026 00:34:36 +0000 (0:00:00.513) 0:06:25.946 ***** 2026-01-28 00:34:55.517653 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:34:55.517660 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:34:55.517667 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:34:55.517674 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:34:55.517681 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:34:55.517688 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:34:55.517695 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:34:55.517703 | orchestrator | 2026-01-28 00:34:55.517710 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-01-28 00:34:55.517717 | orchestrator | Wednesday 28 January 2026 00:34:37 +0000 (0:00:00.491) 0:06:26.437 ***** 2026-01-28 00:34:55.517724 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:34:55.517731 | orchestrator | 
skipping: [testbed-node-3] 2026-01-28 00:34:55.517738 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:34:55.517746 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:34:55.517752 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:34:55.517760 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:34:55.517767 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:34:55.517774 | orchestrator | 2026-01-28 00:34:55.517781 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-01-28 00:34:55.517788 | orchestrator | Wednesday 28 January 2026 00:34:37 +0000 (0:00:00.553) 0:06:26.991 ***** 2026-01-28 00:34:55.517795 | orchestrator | ok: [testbed-manager] 2026-01-28 00:34:55.517803 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:34:55.517810 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:34:55.517817 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:34:55.517824 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:34:55.517831 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:34:55.517839 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:34:55.517847 | orchestrator | 2026-01-28 00:34:55.517855 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-01-28 00:34:55.517864 | orchestrator | Wednesday 28 January 2026 00:34:39 +0000 (0:00:01.974) 0:06:28.965 ***** 2026-01-28 00:34:55.517874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:34:55.517895 | orchestrator | 2026-01-28 00:34:55.517905 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-01-28 00:34:55.517913 | orchestrator | Wednesday 28 January 2026 00:34:40 +0000 (0:00:00.893) 0:06:29.859 ***** 2026-01-28 00:34:55.517921 | 
orchestrator | ok: [testbed-manager] 2026-01-28 00:34:55.517930 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:34:55.517938 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:34:55.517947 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:34:55.517955 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:34:55.517963 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:34:55.517971 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:34:55.517978 | orchestrator | 2026-01-28 00:34:55.517985 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-01-28 00:34:55.517992 | orchestrator | Wednesday 28 January 2026 00:34:41 +0000 (0:00:00.847) 0:06:30.706 ***** 2026-01-28 00:34:55.517999 | orchestrator | ok: [testbed-manager] 2026-01-28 00:34:55.518006 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:34:55.518063 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:34:55.518073 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:34:55.518080 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:34:55.518088 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:34:55.518095 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:34:55.518102 | orchestrator | 2026-01-28 00:34:55.518109 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-01-28 00:34:55.518158 | orchestrator | Wednesday 28 January 2026 00:34:42 +0000 (0:00:00.946) 0:06:31.653 ***** 2026-01-28 00:34:55.518170 | orchestrator | ok: [testbed-manager] 2026-01-28 00:34:55.518177 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:34:55.518185 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:34:55.518192 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:34:55.518199 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:34:55.518206 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:34:55.518213 | orchestrator | changed: 
[testbed-node-2] 2026-01-28 00:34:55.518220 | orchestrator | 2026-01-28 00:34:55.518228 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-01-28 00:34:55.518251 | orchestrator | Wednesday 28 January 2026 00:34:44 +0000 (0:00:01.610) 0:06:33.264 ***** 2026-01-28 00:34:55.518259 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:34:55.518266 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:34:55.518273 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:34:55.518281 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:34:55.518288 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:34:55.518295 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:34:55.518302 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:34:55.518309 | orchestrator | 2026-01-28 00:34:55.518317 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-01-28 00:34:55.518324 | orchestrator | Wednesday 28 January 2026 00:34:45 +0000 (0:00:01.451) 0:06:34.716 ***** 2026-01-28 00:34:55.518331 | orchestrator | ok: [testbed-manager] 2026-01-28 00:34:55.518338 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:34:55.518345 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:34:55.518353 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:34:55.518360 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:34:55.518367 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:34:55.518374 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:34:55.518381 | orchestrator | 2026-01-28 00:34:55.518388 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-01-28 00:34:55.518395 | orchestrator | Wednesday 28 January 2026 00:34:46 +0000 (0:00:01.369) 0:06:36.086 ***** 2026-01-28 00:34:55.518402 | orchestrator | changed: [testbed-manager] 2026-01-28 00:34:55.518411 | orchestrator | changed: [testbed-node-3] 2026-01-28 
00:34:55.518424 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:34:55.518436 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:34:55.518456 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:34:55.518468 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:34:55.518479 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:34:55.518553 | orchestrator | 2026-01-28 00:34:55.518568 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-01-28 00:34:55.518580 | orchestrator | Wednesday 28 January 2026 00:34:48 +0000 (0:00:01.415) 0:06:37.501 ***** 2026-01-28 00:34:55.518607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:34:55.518631 | orchestrator | 2026-01-28 00:34:55.518644 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-01-28 00:34:55.518657 | orchestrator | Wednesday 28 January 2026 00:34:49 +0000 (0:00:01.088) 0:06:38.590 ***** 2026-01-28 00:34:55.518669 | orchestrator | ok: [testbed-manager] 2026-01-28 00:34:55.518681 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:34:55.518694 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:34:55.518706 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:34:55.518718 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:34:55.518731 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:34:55.518743 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:34:55.518756 | orchestrator | 2026-01-28 00:34:55.518768 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-01-28 00:34:55.518780 | orchestrator | Wednesday 28 January 2026 00:34:50 +0000 (0:00:01.403) 0:06:39.993 ***** 2026-01-28 00:34:55.518793 | orchestrator | ok: [testbed-manager] 2026-01-28 
00:34:55.518805 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:34:55.518817 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:34:55.518830 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:34:55.518842 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:34:55.518854 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:34:55.518866 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:34:55.518879 | orchestrator | 2026-01-28 00:34:55.518891 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-01-28 00:34:55.518904 | orchestrator | Wednesday 28 January 2026 00:34:52 +0000 (0:00:01.164) 0:06:41.158 ***** 2026-01-28 00:34:55.518917 | orchestrator | ok: [testbed-manager] 2026-01-28 00:34:55.518929 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:34:55.518941 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:34:55.518954 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:34:55.518966 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:34:55.518994 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:34:55.519007 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:34:55.519020 | orchestrator | 2026-01-28 00:34:55.519028 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-01-28 00:34:55.519035 | orchestrator | Wednesday 28 January 2026 00:34:53 +0000 (0:00:01.230) 0:06:42.389 ***** 2026-01-28 00:34:55.519042 | orchestrator | ok: [testbed-manager] 2026-01-28 00:34:55.519053 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:34:55.519065 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:34:55.519077 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:34:55.519089 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:34:55.519100 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:34:55.519113 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:34:55.519159 | orchestrator | 2026-01-28 00:34:55.519172 | orchestrator | TASK [osism.services.docker : Include bootstrap 
tasks] ************************* 2026-01-28 00:34:55.519184 | orchestrator | Wednesday 28 January 2026 00:34:54 +0000 (0:00:01.217) 0:06:43.607 ***** 2026-01-28 00:34:55.519196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:34:55.519209 | orchestrator | 2026-01-28 00:34:55.519222 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-28 00:34:55.519234 | orchestrator | Wednesday 28 January 2026 00:34:55 +0000 (0:00:00.761) 0:06:44.368 ***** 2026-01-28 00:34:55.519253 | orchestrator | 2026-01-28 00:34:55.519266 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-28 00:34:55.519279 | orchestrator | Wednesday 28 January 2026 00:34:55 +0000 (0:00:00.036) 0:06:44.405 ***** 2026-01-28 00:34:55.519291 | orchestrator | 2026-01-28 00:34:55.519304 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-28 00:34:55.519316 | orchestrator | Wednesday 28 January 2026 00:34:55 +0000 (0:00:00.035) 0:06:44.441 ***** 2026-01-28 00:34:55.519328 | orchestrator | 2026-01-28 00:34:55.519340 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-28 00:34:55.519363 | orchestrator | Wednesday 28 January 2026 00:34:55 +0000 (0:00:00.041) 0:06:44.482 ***** 2026-01-28 00:35:22.633593 | orchestrator | 2026-01-28 00:35:22.633709 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-28 00:35:22.633726 | orchestrator | Wednesday 28 January 2026 00:34:55 +0000 (0:00:00.036) 0:06:44.519 ***** 2026-01-28 00:35:22.633737 | orchestrator | 2026-01-28 00:35:22.633749 | orchestrator | TASK [osism.services.docker : Flush handlers] 
********************************** 2026-01-28 00:35:22.633760 | orchestrator | Wednesday 28 January 2026 00:34:55 +0000 (0:00:00.036) 0:06:44.555 ***** 2026-01-28 00:35:22.633771 | orchestrator | 2026-01-28 00:35:22.633782 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-01-28 00:35:22.633793 | orchestrator | Wednesday 28 January 2026 00:34:55 +0000 (0:00:00.042) 0:06:44.598 ***** 2026-01-28 00:35:22.633804 | orchestrator | 2026-01-28 00:35:22.633815 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-01-28 00:35:22.633826 | orchestrator | Wednesday 28 January 2026 00:34:55 +0000 (0:00:00.036) 0:06:44.635 ***** 2026-01-28 00:35:22.633838 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:35:22.633851 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:35:22.633862 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:35:22.633873 | orchestrator | 2026-01-28 00:35:22.633884 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-01-28 00:35:22.633895 | orchestrator | Wednesday 28 January 2026 00:34:56 +0000 (0:00:01.147) 0:06:45.783 ***** 2026-01-28 00:35:22.633906 | orchestrator | changed: [testbed-manager] 2026-01-28 00:35:22.633919 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:35:22.633930 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:35:22.633941 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:35:22.633952 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:35:22.633963 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:35:22.633974 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:35:22.633985 | orchestrator | 2026-01-28 00:35:22.633996 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-01-28 00:35:22.634007 | orchestrator | Wednesday 28 January 2026 00:34:58 +0000 (0:00:01.499) 0:06:47.282 ***** 2026-01-28 
00:35:22.634113 | orchestrator | changed: [testbed-manager] 2026-01-28 00:35:22.634129 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:35:22.634140 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:35:22.634151 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:35:22.634162 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:35:22.634173 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:35:22.634184 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:35:22.634195 | orchestrator | 2026-01-28 00:35:22.634205 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-01-28 00:35:22.634216 | orchestrator | Wednesday 28 January 2026 00:34:59 +0000 (0:00:01.136) 0:06:48.419 ***** 2026-01-28 00:35:22.634227 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:35:22.634238 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:35:22.634249 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:35:22.634260 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:35:22.634271 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:35:22.634282 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:35:22.634316 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:35:22.634328 | orchestrator | 2026-01-28 00:35:22.634339 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-01-28 00:35:22.634350 | orchestrator | Wednesday 28 January 2026 00:35:01 +0000 (0:00:02.473) 0:06:50.892 ***** 2026-01-28 00:35:22.634362 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:35:22.634372 | orchestrator | 2026-01-28 00:35:22.634383 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-01-28 00:35:22.634394 | orchestrator | Wednesday 28 January 2026 00:35:01 +0000 (0:00:00.160) 0:06:51.053 ***** 2026-01-28 00:35:22.634405 | orchestrator | ok: [testbed-manager] 2026-01-28 00:35:22.634416 | 
orchestrator | changed: [testbed-node-3] 2026-01-28 00:35:22.634427 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:35:22.634437 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:35:22.634448 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:35:22.634459 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:35:22.634487 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:35:22.634498 | orchestrator | 2026-01-28 00:35:22.634509 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-01-28 00:35:22.634522 | orchestrator | Wednesday 28 January 2026 00:35:03 +0000 (0:00:01.801) 0:06:52.854 ***** 2026-01-28 00:35:22.634533 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:35:22.634543 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:35:22.634554 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:35:22.634565 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:35:22.634576 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:35:22.634587 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:35:22.634597 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:35:22.634608 | orchestrator | 2026-01-28 00:35:22.634619 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-01-28 00:35:22.634630 | orchestrator | Wednesday 28 January 2026 00:35:04 +0000 (0:00:00.618) 0:06:53.473 ***** 2026-01-28 00:35:22.634641 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:35:22.634655 | orchestrator | 2026-01-28 00:35:22.634666 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-01-28 00:35:22.634677 | orchestrator | Wednesday 28 January 2026 00:35:05 +0000 
(0:00:01.092) 0:06:54.565 ***** 2026-01-28 00:35:22.634687 | orchestrator | ok: [testbed-manager] 2026-01-28 00:35:22.634698 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:35:22.634709 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:35:22.634720 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:35:22.634731 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:35:22.634742 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:35:22.634753 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:35:22.634764 | orchestrator | 2026-01-28 00:35:22.634775 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-01-28 00:35:22.634786 | orchestrator | Wednesday 28 January 2026 00:35:06 +0000 (0:00:00.888) 0:06:55.454 ***** 2026-01-28 00:35:22.634797 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-01-28 00:35:22.634825 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-01-28 00:35:22.634837 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-01-28 00:35:22.634848 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-01-28 00:35:22.634859 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-01-28 00:35:22.634869 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-01-28 00:35:22.634880 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-01-28 00:35:22.634891 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-01-28 00:35:22.634902 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-01-28 00:35:22.634921 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-01-28 00:35:22.634932 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-01-28 00:35:22.634942 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-01-28 00:35:22.634953 | orchestrator | changed: [testbed-node-0] => 
(item=docker_images) 2026-01-28 00:35:22.634964 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-01-28 00:35:22.634975 | orchestrator | 2026-01-28 00:35:22.634986 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-01-28 00:35:22.634996 | orchestrator | Wednesday 28 January 2026 00:35:08 +0000 (0:00:02.609) 0:06:58.064 ***** 2026-01-28 00:35:22.635007 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:35:22.635018 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:35:22.635029 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:35:22.635040 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:35:22.635050 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:35:22.635061 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:35:22.635114 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:35:22.635126 | orchestrator | 2026-01-28 00:35:22.635137 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-01-28 00:35:22.635148 | orchestrator | Wednesday 28 January 2026 00:35:09 +0000 (0:00:00.742) 0:06:58.806 ***** 2026-01-28 00:35:22.635161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:35:22.635173 | orchestrator | 2026-01-28 00:35:22.635184 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-01-28 00:35:22.635195 | orchestrator | Wednesday 28 January 2026 00:35:10 +0000 (0:00:00.808) 0:06:59.615 ***** 2026-01-28 00:35:22.635205 | orchestrator | ok: [testbed-manager] 2026-01-28 00:35:22.635216 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:35:22.635227 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:35:22.635238 | 
orchestrator | ok: [testbed-node-5] 2026-01-28 00:35:22.635249 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:35:22.635259 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:35:22.635270 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:35:22.635281 | orchestrator | 2026-01-28 00:35:22.635291 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-01-28 00:35:22.635302 | orchestrator | Wednesday 28 January 2026 00:35:11 +0000 (0:00:00.859) 0:07:00.474 ***** 2026-01-28 00:35:22.635313 | orchestrator | ok: [testbed-manager] 2026-01-28 00:35:22.635324 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:35:22.635334 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:35:22.635345 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:35:22.635356 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:35:22.635367 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:35:22.635377 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:35:22.635388 | orchestrator | 2026-01-28 00:35:22.635399 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-01-28 00:35:22.635410 | orchestrator | Wednesday 28 January 2026 00:35:12 +0000 (0:00:01.042) 0:07:01.517 ***** 2026-01-28 00:35:22.635427 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:35:22.635438 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:35:22.635449 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:35:22.635460 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:35:22.635471 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:35:22.635482 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:35:22.635492 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:35:22.635503 | orchestrator | 2026-01-28 00:35:22.635514 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-01-28 00:35:22.635525 | orchestrator | Wednesday 28 January 2026 
00:35:12 +0000 (0:00:00.567) 0:07:02.084 ***** 2026-01-28 00:35:22.635535 | orchestrator | ok: [testbed-manager] 2026-01-28 00:35:22.635553 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:35:22.635564 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:35:22.635575 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:35:22.635585 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:35:22.635596 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:35:22.635607 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:35:22.635617 | orchestrator | 2026-01-28 00:35:22.635628 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-01-28 00:35:22.635639 | orchestrator | Wednesday 28 January 2026 00:35:14 +0000 (0:00:01.512) 0:07:03.597 ***** 2026-01-28 00:35:22.635650 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:35:22.635661 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:35:22.635672 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:35:22.635682 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:35:22.635693 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:35:22.635704 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:35:22.635715 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:35:22.635726 | orchestrator | 2026-01-28 00:35:22.635737 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-01-28 00:35:22.635748 | orchestrator | Wednesday 28 January 2026 00:35:14 +0000 (0:00:00.522) 0:07:04.119 ***** 2026-01-28 00:35:22.635759 | orchestrator | ok: [testbed-manager] 2026-01-28 00:35:22.635770 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:35:22.635780 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:35:22.635791 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:35:22.635802 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:35:22.635813 | orchestrator | changed: [testbed-node-2] 2026-01-28 
00:35:22.635831 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:35:54.949293 | orchestrator |
2026-01-28 00:35:54.949404 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-01-28 00:35:54.949422 | orchestrator | Wednesday 28 January 2026 00:35:22 +0000 (0:00:07.637) 0:07:11.757 *****
2026-01-28 00:35:54.949435 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.949450 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:35:54.949462 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:35:54.949473 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:35:54.949484 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:35:54.949496 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:35:54.949507 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:35:54.949518 | orchestrator |
2026-01-28 00:35:54.949529 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-01-28 00:35:54.949541 | orchestrator | Wednesday 28 January 2026 00:35:24 +0000 (0:00:01.758) 0:07:13.515 *****
2026-01-28 00:35:54.949552 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.949563 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:35:54.949574 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:35:54.949585 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:35:54.949596 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:35:54.949607 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:35:54.949618 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:35:54.949629 | orchestrator |
2026-01-28 00:35:54.949640 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-01-28 00:35:54.949651 | orchestrator | Wednesday 28 January 2026 00:35:26 +0000 (0:00:01.954) 0:07:15.469 *****
2026-01-28 00:35:54.949662 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.949673 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:35:54.949684 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:35:54.949695 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:35:54.949706 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:35:54.949717 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:35:54.949728 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:35:54.949739 | orchestrator |
2026-01-28 00:35:54.949750 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-28 00:35:54.949761 | orchestrator | Wednesday 28 January 2026 00:35:28 +0000 (0:00:01.676) 0:07:17.146 *****
2026-01-28 00:35:54.949799 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.949811 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:35:54.949824 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:35:54.949836 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:35:54.949849 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:35:54.949861 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:35:54.949874 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:35:54.949886 | orchestrator |
2026-01-28 00:35:54.949899 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-28 00:35:54.949912 | orchestrator | Wednesday 28 January 2026 00:35:28 +0000 (0:00:00.875) 0:07:18.021 *****
2026-01-28 00:35:54.949925 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:35:54.949937 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:35:54.949950 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:35:54.949962 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:35:54.949974 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:35:54.949987 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:35:54.949999 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:35:54.950011 | orchestrator |
2026-01-28 00:35:54.950216 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-01-28 00:35:54.950231 | orchestrator | Wednesday 28 January 2026 00:35:29 +0000 (0:00:01.036) 0:07:19.058 *****
2026-01-28 00:35:54.950242 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:35:54.950253 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:35:54.950264 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:35:54.950275 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:35:54.950285 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:35:54.950296 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:35:54.950307 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:35:54.950318 | orchestrator |
2026-01-28 00:35:54.950328 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-01-28 00:35:54.950339 | orchestrator | Wednesday 28 January 2026 00:35:30 +0000 (0:00:00.538) 0:07:19.596 *****
2026-01-28 00:35:54.950350 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.950361 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:35:54.950372 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:35:54.950382 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:35:54.950410 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:35:54.950422 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:35:54.950433 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:35:54.950444 | orchestrator |
2026-01-28 00:35:54.950455 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-01-28 00:35:54.950465 | orchestrator | Wednesday 28 January 2026 00:35:30 +0000 (0:00:00.531) 0:07:20.128 *****
2026-01-28 00:35:54.950477 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.950488 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:35:54.950499 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:35:54.950509 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:35:54.950520 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:35:54.950531 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:35:54.950542 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:35:54.950553 | orchestrator |
2026-01-28 00:35:54.950564 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-01-28 00:35:54.950574 | orchestrator | Wednesday 28 January 2026 00:35:31 +0000 (0:00:00.581) 0:07:20.709 *****
2026-01-28 00:35:54.950585 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.950596 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:35:54.950606 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:35:54.950617 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:35:54.950628 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:35:54.950639 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:35:54.950649 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:35:54.950660 | orchestrator |
2026-01-28 00:35:54.950671 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-01-28 00:35:54.950694 | orchestrator | Wednesday 28 January 2026 00:35:32 +0000 (0:00:00.762) 0:07:21.472 *****
2026-01-28 00:35:54.950705 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.950716 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:35:54.950727 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:35:54.950738 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:35:54.950748 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:35:54.950759 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:35:54.950770 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:35:54.950781 | orchestrator |
2026-01-28 00:35:54.950812 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-01-28 00:35:54.950824 | orchestrator | Wednesday 28 January 2026 00:35:37 +0000 (0:00:05.430) 0:07:26.902 *****
2026-01-28 00:35:54.950835 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:35:54.950846 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:35:54.950857 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:35:54.950867 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:35:54.950878 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:35:54.950889 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:35:54.950900 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:35:54.950910 | orchestrator |
2026-01-28 00:35:54.950921 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-01-28 00:35:54.950932 | orchestrator | Wednesday 28 January 2026 00:35:38 +0000 (0:00:00.507) 0:07:27.410 *****
2026-01-28 00:35:54.950945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:35:54.950958 | orchestrator |
2026-01-28 00:35:54.950970 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-01-28 00:35:54.950980 | orchestrator | Wednesday 28 January 2026 00:35:39 +0000 (0:00:00.764) 0:07:28.175 *****
2026-01-28 00:35:54.950991 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.951002 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:35:54.951012 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:35:54.951023 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:35:54.951034 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:35:54.951045 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:35:54.951056 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:35:54.951108 | orchestrator |
2026-01-28 00:35:54.951119 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-01-28 00:35:54.951130 | orchestrator | Wednesday 28 January 2026 00:35:40 +0000 (0:00:01.908) 0:07:30.084 *****
2026-01-28 00:35:54.951141 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.951152 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:35:54.951163 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:35:54.951174 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:35:54.951184 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:35:54.951195 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:35:54.951206 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:35:54.951217 | orchestrator |
2026-01-28 00:35:54.951228 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-01-28 00:35:54.951239 | orchestrator | Wednesday 28 January 2026 00:35:42 +0000 (0:00:01.088) 0:07:31.172 *****
2026-01-28 00:35:54.951250 | orchestrator | ok: [testbed-manager]
2026-01-28 00:35:54.951260 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:35:54.951271 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:35:54.951282 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:35:54.951293 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:35:54.951304 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:35:54.951314 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:35:54.951325 | orchestrator |
2026-01-28 00:35:54.951336 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-01-28 00:35:54.951347 | orchestrator | Wednesday 28 January 2026 00:35:42 +0000 (0:00:00.848) 0:07:32.021 *****
2026-01-28 00:35:54.951365 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-28 00:35:54.951378 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-28 00:35:54.951389 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-28 00:35:54.951406 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-28 00:35:54.951416 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-28 00:35:54.951427 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-28 00:35:54.951438 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-01-28 00:35:54.951449 | orchestrator |
2026-01-28 00:35:54.951460 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-01-28 00:35:54.951471 | orchestrator | Wednesday 28 January 2026 00:35:44 +0000 (0:00:01.859) 0:07:33.880 *****
2026-01-28 00:35:54.951482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:35:54.951493 | orchestrator |
2026-01-28 00:35:54.951504 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-01-28 00:35:54.951514 | orchestrator | Wednesday 28 January 2026 00:35:45 +0000 (0:00:00.730) 0:07:34.611 *****
2026-01-28 00:35:54.951525 | orchestrator | changed: [testbed-manager]
2026-01-28 00:35:54.951536 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:35:54.951547 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:35:54.951558 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:35:54.951568 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:35:54.951579 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:35:54.951590 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:35:54.951601 | orchestrator |
2026-01-28 00:35:54.951620 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-01-28 00:36:27.028432 | orchestrator | Wednesday 28 January 2026 00:35:54 +0000 (0:00:09.451) 0:07:44.063 *****
2026-01-28 00:36:27.028539 | orchestrator | ok: [testbed-manager]
2026-01-28 00:36:27.028557 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:36:27.028569 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:36:27.028580 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:36:27.028592 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:36:27.028602 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:36:27.028613 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:36:27.028625 | orchestrator |
2026-01-28 00:36:27.028637 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-01-28 00:36:27.028648 | orchestrator | Wednesday 28 January 2026 00:35:56 +0000 (0:00:02.032) 0:07:46.096 *****
2026-01-28 00:36:27.028665 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:36:27.028684 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:36:27.028700 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:36:27.028717 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:36:27.028734 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:36:27.028752 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:36:27.028772 | orchestrator |
2026-01-28 00:36:27.028792 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-01-28 00:36:27.028810 | orchestrator | Wednesday 28 January 2026 00:35:58 +0000 (0:00:01.288) 0:07:47.385 *****
2026-01-28 00:36:27.028826 | orchestrator | changed: [testbed-manager]
2026-01-28 00:36:27.028861 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:36:27.028873 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:36:27.028884 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:36:27.028894 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:36:27.028905 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:36:27.028916 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:36:27.028927 | orchestrator |
2026-01-28 00:36:27.028940 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-01-28 00:36:27.028952 | orchestrator |
2026-01-28 00:36:27.028966 | orchestrator | TASK [Include hardening role] **************************************************
2026-01-28 00:36:27.028978 | orchestrator | Wednesday 28 January 2026 00:35:59 +0000 (0:00:01.260) 0:07:48.646 *****
2026-01-28 00:36:27.028991 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:36:27.029004 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:36:27.029017 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:36:27.029030 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:36:27.029042 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:36:27.029081 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:36:27.029095 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:36:27.029108 | orchestrator |
2026-01-28 00:36:27.029121 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-01-28 00:36:27.029134 | orchestrator |
2026-01-28 00:36:27.029146 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-01-28 00:36:27.029159 | orchestrator | Wednesday 28 January 2026 00:36:00 +0000 (0:00:00.787) 0:07:49.433 *****
2026-01-28 00:36:27.029172 | orchestrator | changed: [testbed-manager]
2026-01-28 00:36:27.029184 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:36:27.029197 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:36:27.029210 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:36:27.029261 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:36:27.029286 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:36:27.029298 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:36:27.029309 | orchestrator |
2026-01-28 00:36:27.029320 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-01-28 00:36:27.029331 | orchestrator | Wednesday 28 January 2026 00:36:01 +0000 (0:00:01.375) 0:07:50.808 *****
2026-01-28 00:36:27.029342 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:36:27.029365 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:36:27.029376 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:36:27.029387 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:36:27.029397 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:36:27.029410 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:36:27.029429 | orchestrator | ok: [testbed-manager]
2026-01-28 00:36:27.029449 | orchestrator |
2026-01-28 00:36:27.029468 | orchestrator | TASK [Include auditd role] *****************************************************
2026-01-28 00:36:27.029503 | orchestrator | Wednesday 28 January 2026 00:36:03 +0000 (0:00:02.095) 0:07:52.904 *****
2026-01-28 00:36:27.029515 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:36:27.029526 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:36:27.029537 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:36:27.029547 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:36:27.029558 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:36:27.029569 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:36:27.029579 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:36:27.029590 | orchestrator |
2026-01-28 00:36:27.029601 | orchestrator | TASK [Include smartd role] *****************************************************
2026-01-28 00:36:27.029613 | orchestrator | Wednesday 28 January 2026 00:36:04 +0000 (0:00:00.535) 0:07:53.439 *****
2026-01-28 00:36:27.029625 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:36:27.029637 | orchestrator |
2026-01-28 00:36:27.029648 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-01-28 00:36:27.029672 | orchestrator | Wednesday 28 January 2026 00:36:05 +0000 (0:00:01.016) 0:07:54.456 *****
2026-01-28 00:36:27.029685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:36:27.029698 | orchestrator |
2026-01-28 00:36:27.029709 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-01-28 00:36:27.029720 | orchestrator | Wednesday 28 January 2026 00:36:06 +0000 (0:00:00.732) 0:07:55.189 *****
2026-01-28 00:36:27.029730 | orchestrator | changed: [testbed-manager]
2026-01-28 00:36:27.029741 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:36:27.029752 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:36:27.029763 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:36:27.029774 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:36:27.029785 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:36:27.029796 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:36:27.029806 | orchestrator |
2026-01-28 00:36:27.029837 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-01-28 00:36:27.029849 | orchestrator | Wednesday 28 January 2026 00:36:14 +0000 (0:00:08.682) 0:08:03.872 *****
2026-01-28 00:36:27.029859 | orchestrator | changed: [testbed-manager]
2026-01-28 00:36:27.029870 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:36:27.029881 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:36:27.029891 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:36:27.029902 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:36:27.029913 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:36:27.029923 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:36:27.029934 | orchestrator |
2026-01-28 00:36:27.029944 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-01-28 00:36:27.029955 | orchestrator | Wednesday 28 January 2026 00:36:15 +0000 (0:00:01.118) 0:08:04.991 *****
2026-01-28 00:36:27.029966 | orchestrator | changed: [testbed-manager]
2026-01-28 00:36:27.029977 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:36:27.029987 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:36:27.029998 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:36:27.030009 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:36:27.030107 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:36:27.030121 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:36:27.030132 | orchestrator |
2026-01-28 00:36:27.030143 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-01-28 00:36:27.030154 | orchestrator | Wednesday 28 January 2026 00:36:17 +0000 (0:00:01.502) 0:08:06.493 *****
2026-01-28 00:36:27.030165 | orchestrator | changed: [testbed-manager]
2026-01-28 00:36:27.030175 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:36:27.030186 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:36:27.030197 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:36:27.030207 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:36:27.030218 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:36:27.030229 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:36:27.030239 | orchestrator |
2026-01-28 00:36:27.030250 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-01-28 00:36:27.030261 | orchestrator | Wednesday 28 January 2026 00:36:19 +0000 (0:00:01.988) 0:08:08.482 *****
2026-01-28 00:36:27.030272 | orchestrator | changed: [testbed-manager]
2026-01-28 00:36:27.030282 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:36:27.030293 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:36:27.030304 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:36:27.030315 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:36:27.030325 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:36:27.030336 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:36:27.030347 | orchestrator |
2026-01-28 00:36:27.030358 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-01-28 00:36:27.030378 | orchestrator | Wednesday 28 January 2026 00:36:20 +0000 (0:00:01.309) 0:08:09.791 *****
2026-01-28 00:36:27.030389 | orchestrator | changed: [testbed-manager]
2026-01-28 00:36:27.030399 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:36:27.030410 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:36:27.030421 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:36:27.030432 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:36:27.030443 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:36:27.030453 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:36:27.030464 | orchestrator |
2026-01-28 00:36:27.030475 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-01-28 00:36:27.030485 | orchestrator |
2026-01-28 00:36:27.030496 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-01-28 00:36:27.030521 | orchestrator | Wednesday 28 January 2026 00:36:21 +0000 (0:00:01.188) 0:08:10.980 *****
2026-01-28 00:36:27.030532 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:36:27.030544 | orchestrator |
2026-01-28 00:36:27.030565 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-28 00:36:27.030582 | orchestrator | Wednesday 28 January 2026 00:36:22 +0000 (0:00:00.810) 0:08:11.790 *****
2026-01-28 00:36:27.030593 | orchestrator | ok: [testbed-manager]
2026-01-28 00:36:27.030604 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:36:27.030615 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:36:27.030626 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:36:27.030637 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:36:27.030648 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:36:27.030658 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:36:27.030669 | orchestrator |
2026-01-28 00:36:27.030680 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-28 00:36:27.030692 | orchestrator | Wednesday 28 January 2026 00:36:23 +0000 (0:00:01.134) 0:08:12.925 *****
2026-01-28 00:36:27.030703 | orchestrator | changed: [testbed-manager]
2026-01-28 00:36:27.030714 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:36:27.030724 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:36:27.030735 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:36:27.030746 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:36:27.030757 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:36:27.030768 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:36:27.030778 | orchestrator |
2026-01-28 00:36:27.030789 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-01-28 00:36:27.030800 | orchestrator | Wednesday 28 January 2026 00:36:25 +0000 (0:00:01.225) 0:08:14.150 *****
2026-01-28 00:36:27.030811 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:36:27.030822 | orchestrator |
2026-01-28 00:36:27.030833 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-01-28 00:36:27.030844 | orchestrator | Wednesday 28 January 2026 00:36:26 +0000 (0:00:01.081) 0:08:15.232 *****
2026-01-28 00:36:27.030855 | orchestrator | ok: [testbed-manager]
2026-01-28 00:36:27.030866 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:36:27.030876 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:36:27.030887 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:36:27.030898 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:36:27.030909 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:36:27.030920 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:36:27.030931 | orchestrator |
2026-01-28 00:36:27.030949 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-01-28 00:36:28.645378 | orchestrator | Wednesday 28 January 2026 00:36:27 +0000 (0:00:00.918) 0:08:16.150 *****
2026-01-28 00:36:28.645481 | orchestrator | changed: [testbed-manager]
2026-01-28 00:36:28.645499 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:36:28.645511 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:36:28.645552 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:36:28.645563 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:36:28.645575 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:36:28.645586 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:36:28.645597 | orchestrator |
2026-01-28 00:36:28.645609 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:36:28.645621 | orchestrator | testbed-manager : ok=168  changed=41  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-01-28 00:36:28.645634 | orchestrator | testbed-node-0 : ok=177  changed=70  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-28 00:36:28.645645 | orchestrator | testbed-node-1 : ok=177  changed=70  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-28 00:36:28.645656 | orchestrator | testbed-node-2 : ok=177  changed=70  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-01-28 00:36:28.645667 | orchestrator | testbed-node-3 : ok=175  changed=66  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-01-28 00:36:28.645678 | orchestrator | testbed-node-4 : ok=175  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-28 00:36:28.645688 | orchestrator | testbed-node-5 : ok=175  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-01-28 00:36:28.645699 | orchestrator |
2026-01-28 00:36:28.645710 | orchestrator |
2026-01-28 00:36:28.645721 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:36:28.645732 | orchestrator | Wednesday 28 January 2026 00:36:28 +0000 (0:00:01.120) 0:08:17.271 *****
2026-01-28 00:36:28.645743 | orchestrator | ===============================================================================
2026-01-28 00:36:28.645754 | orchestrator | osism.commons.packages : Install required packages --------------------- 72.00s
2026-01-28 00:36:28.645765 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.07s
2026-01-28 00:36:28.645776 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.18s
2026-01-28 00:36:28.645787 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.82s
2026-01-28 00:36:28.645798 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.62s
2026-01-28 00:36:28.645810 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.00s
2026-01-28 00:36:28.645820 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.43s
2026-01-28 00:36:28.645831 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.83s
2026-01-28 00:36:28.645842 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.83s
2026-01-28 00:36:28.645868 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.45s
2026-01-28 00:36:28.645879 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.74s
2026-01-28 00:36:28.645891 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.68s
2026-01-28 00:36:28.645902 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.47s
2026-01-28 00:36:28.645912 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.23s
2026-01-28 00:36:28.645925 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.09s
2026-01-28 00:36:28.645937 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.64s
2026-01-28 00:36:28.645950 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.11s
2026-01-28 00:36:28.645963 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.96s
2026-01-28 00:36:28.645985 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.30s
2026-01-28 00:36:28.645998 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.73s
2026-01-28 00:36:29.000329 | orchestrator | + osism apply fail2ban
2026-01-28 00:36:41.811623 | orchestrator | 2026-01-28 00:36:41 | INFO  | Task fd02808d-80b7-411b-a4e6-653825bcbf4b (fail2ban) was prepared for execution.
2026-01-28 00:36:41.811706 | orchestrator | 2026-01-28 00:36:41 | INFO  | It takes a moment until task fd02808d-80b7-411b-a4e6-653825bcbf4b (fail2ban) has been started and output is visible here.
2026-01-28 00:37:04.511323 | orchestrator |
2026-01-28 00:37:04.511438 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-01-28 00:37:04.511458 | orchestrator |
2026-01-28 00:37:04.511475 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-01-28 00:37:04.511491 | orchestrator | Wednesday 28 January 2026 00:36:46 +0000 (0:00:00.274) 0:00:00.274 *****
2026-01-28 00:37:04.511504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:37:04.511517 | orchestrator |
2026-01-28 00:37:04.511526 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-01-28 00:37:04.511535 | orchestrator | Wednesday 28 January 2026 00:36:47 +0000 (0:00:01.194) 0:00:01.468 *****
2026-01-28 00:37:04.511544 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:37:04.511555 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:37:04.511564 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:37:04.511573 | orchestrator | changed: [testbed-manager]
2026-01-28 00:37:04.511583 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:37:04.511591 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:37:04.511600 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:37:04.511609 | orchestrator |
2026-01-28 00:37:04.511618 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-01-28 00:37:04.511626 | orchestrator | Wednesday 28 January 2026 00:36:59 +0000 (0:00:11.579) 0:00:13.048 *****
2026-01-28 00:37:04.511635 | orchestrator | changed: [testbed-manager]
2026-01-28 00:37:04.511644 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:37:04.511653 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:37:04.511661 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:37:04.511670 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:37:04.511678 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:37:04.511687 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:37:04.511699 | orchestrator |
2026-01-28 00:37:04.511714 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-01-28 00:37:04.511727 | orchestrator | Wednesday 28 January 2026 00:37:00 +0000 (0:00:01.536) 0:00:14.597 *****
2026-01-28 00:37:04.511736 | orchestrator | ok: [testbed-manager]
2026-01-28 00:37:04.511746 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:37:04.511755 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:37:04.511764 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:37:04.511772 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:37:04.511781 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:37:04.511790 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:37:04.511799 | orchestrator |
2026-01-28 00:37:04.511807 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-01-28 00:37:04.511816 | orchestrator | Wednesday 28 January 2026 00:37:02 +0000 (0:00:01.536) 0:00:16.134 *****
2026-01-28 00:37:04.511825 | orchestrator | changed: [testbed-manager]
2026-01-28 00:37:04.511834 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:37:04.511843 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:37:04.511852 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:37:04.511862 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:37:04.511877 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:37:04.511895 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:37:04.511943 | orchestrator |
2026-01-28 00:37:04.511956 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:37:04.511966 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:37:04.511978 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:37:04.511989 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:37:04.511999 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:37:04.512009 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:37:04.512017 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:37:04.512026 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:37:04.512035 | orchestrator |
2026-01-28 00:37:04.512044 | orchestrator |
2026-01-28 00:37:04.512053 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:37:04.512087 | orchestrator | Wednesday 28 January 2026 00:37:04 +0000 (0:00:01.753) 0:00:17.887 *****
2026-01-28 00:37:04.512097 | orchestrator | ===============================================================================
2026-01-28 00:37:04.512106 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.58s
2026-01-28 00:37:04.512115 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.75s
2026-01-28 00:37:04.512124 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.55s
2026-01-28 00:37:04.512132 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.54s 2026-01-28 00:37:04.512141 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.19s 2026-01-28 00:37:04.897656 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-01-28 00:37:04.897734 | orchestrator | + osism apply network 2026-01-28 00:37:17.060754 | orchestrator | 2026-01-28 00:37:17 | INFO  | Task 32cf1b1d-06c8-48a2-a213-d1a05fe69836 (network) was prepared for execution. 2026-01-28 00:37:17.060874 | orchestrator | 2026-01-28 00:37:17 | INFO  | It takes a moment until task 32cf1b1d-06c8-48a2-a213-d1a05fe69836 (network) has been started and output is visible here. 2026-01-28 00:37:46.510853 | orchestrator | 2026-01-28 00:37:46.510946 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-01-28 00:37:46.510959 | orchestrator | 2026-01-28 00:37:46.510968 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-01-28 00:37:46.510976 | orchestrator | Wednesday 28 January 2026 00:37:21 +0000 (0:00:00.255) 0:00:00.255 ***** 2026-01-28 00:37:46.510985 | orchestrator | ok: [testbed-manager] 2026-01-28 00:37:46.510995 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:37:46.511004 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:37:46.511012 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:37:46.511019 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:37:46.511027 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:37:46.511035 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:37:46.511043 | orchestrator | 2026-01-28 00:37:46.511052 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-01-28 00:37:46.511060 | orchestrator | Wednesday 28 January 2026 00:37:22 +0000 (0:00:00.753) 0:00:01.009 ***** 2026-01-28 00:37:46.511069 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:37:46.511129 | orchestrator | 2026-01-28 00:37:46.511138 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-01-28 00:37:46.511146 | orchestrator | Wednesday 28 January 2026 00:37:23 +0000 (0:00:01.257) 0:00:02.266 ***** 2026-01-28 00:37:46.511154 | orchestrator | ok: [testbed-manager] 2026-01-28 00:37:46.511162 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:37:46.511170 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:37:46.511178 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:37:46.511186 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:37:46.511194 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:37:46.511201 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:37:46.511209 | orchestrator | 2026-01-28 00:37:46.511217 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-01-28 00:37:46.511226 | orchestrator | Wednesday 28 January 2026 00:37:25 +0000 (0:00:02.212) 0:00:04.479 ***** 2026-01-28 00:37:46.511233 | orchestrator | ok: [testbed-manager] 2026-01-28 00:37:46.511241 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:37:46.511249 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:37:46.511257 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:37:46.511265 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:37:46.511273 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:37:46.511281 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:37:46.511288 | orchestrator | 2026-01-28 00:37:46.511297 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-01-28 00:37:46.511304 | orchestrator | Wednesday 28 January 2026 00:37:27 +0000 (0:00:01.793) 0:00:06.272 ***** 
2026-01-28 00:37:46.511313 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-01-28 00:37:46.511321 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-01-28 00:37:46.511329 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-01-28 00:37:46.511337 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-01-28 00:37:46.511345 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-01-28 00:37:46.511352 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-01-28 00:37:46.511360 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-01-28 00:37:46.511368 | orchestrator | 2026-01-28 00:37:46.511376 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-01-28 00:37:46.511401 | orchestrator | Wednesday 28 January 2026 00:37:28 +0000 (0:00:00.992) 0:00:07.265 ***** 2026-01-28 00:37:46.511411 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-28 00:37:46.511421 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 00:37:46.511430 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-28 00:37:46.511440 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-28 00:37:46.511448 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-28 00:37:46.511458 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-28 00:37:46.511467 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-28 00:37:46.511476 | orchestrator | 2026-01-28 00:37:46.511489 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-01-28 00:37:46.511498 | orchestrator | Wednesday 28 January 2026 00:37:31 +0000 (0:00:03.365) 0:00:10.630 ***** 2026-01-28 00:37:46.511508 | orchestrator | changed: [testbed-manager] 2026-01-28 00:37:46.511517 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:37:46.511526 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:37:46.511535 | orchestrator | changed: 
[testbed-node-2] 2026-01-28 00:37:46.511544 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:37:46.511553 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:37:46.511562 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:37:46.511571 | orchestrator | 2026-01-28 00:37:46.511580 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-01-28 00:37:46.511589 | orchestrator | Wednesday 28 January 2026 00:37:33 +0000 (0:00:01.626) 0:00:12.257 ***** 2026-01-28 00:37:46.511598 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-28 00:37:46.511614 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 00:37:46.511624 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-28 00:37:46.511633 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-28 00:37:46.511642 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-28 00:37:46.511651 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-28 00:37:46.511660 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-28 00:37:46.511669 | orchestrator | 2026-01-28 00:37:46.511678 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-01-28 00:37:46.511687 | orchestrator | Wednesday 28 January 2026 00:37:35 +0000 (0:00:01.823) 0:00:14.080 ***** 2026-01-28 00:37:46.511697 | orchestrator | ok: [testbed-manager] 2026-01-28 00:37:46.511706 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:37:46.511715 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:37:46.511724 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:37:46.511733 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:37:46.511743 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:37:46.511751 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:37:46.511761 | orchestrator | 2026-01-28 00:37:46.511769 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-01-28 00:37:46.511791 | 
orchestrator | Wednesday 28 January 2026 00:37:36 +0000 (0:00:01.230) 0:00:15.310 ***** 2026-01-28 00:37:46.511799 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:37:46.511807 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:37:46.511815 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:37:46.511823 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:37:46.511831 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:37:46.511839 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:37:46.511846 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:37:46.511854 | orchestrator | 2026-01-28 00:37:46.511862 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-01-28 00:37:46.511870 | orchestrator | Wednesday 28 January 2026 00:37:37 +0000 (0:00:00.701) 0:00:16.012 ***** 2026-01-28 00:37:46.511878 | orchestrator | ok: [testbed-manager] 2026-01-28 00:37:46.511886 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:37:46.511894 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:37:46.511901 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:37:46.511909 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:37:46.511917 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:37:46.511925 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:37:46.511933 | orchestrator | 2026-01-28 00:37:46.511941 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-01-28 00:37:46.511949 | orchestrator | Wednesday 28 January 2026 00:37:39 +0000 (0:00:02.293) 0:00:18.306 ***** 2026-01-28 00:37:46.511956 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:37:46.511964 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:37:46.511972 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:37:46.511980 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:37:46.511988 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:37:46.511996 | 
orchestrator | skipping: [testbed-node-5] 2026-01-28 00:37:46.512004 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-01-28 00:37:46.512013 | orchestrator | 2026-01-28 00:37:46.512021 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-01-28 00:37:46.512029 | orchestrator | Wednesday 28 January 2026 00:37:40 +0000 (0:00:00.897) 0:00:19.204 ***** 2026-01-28 00:37:46.512037 | orchestrator | ok: [testbed-manager] 2026-01-28 00:37:46.512045 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:37:46.512053 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:37:46.512060 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:37:46.512068 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:37:46.512091 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:37:46.512099 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:37:46.512107 | orchestrator | 2026-01-28 00:37:46.512115 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-01-28 00:37:46.512128 | orchestrator | Wednesday 28 January 2026 00:37:42 +0000 (0:00:01.662) 0:00:20.867 ***** 2026-01-28 00:37:46.512136 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:37:46.512146 | orchestrator | 2026-01-28 00:37:46.512154 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-01-28 00:37:46.512162 | orchestrator | Wednesday 28 January 2026 00:37:43 +0000 (0:00:01.328) 0:00:22.195 ***** 2026-01-28 00:37:46.512169 | orchestrator | ok: [testbed-manager] 2026-01-28 00:37:46.512177 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:37:46.512185 | orchestrator 
| ok: [testbed-node-1] 2026-01-28 00:37:46.512193 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:37:46.512201 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:37:46.512208 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:37:46.512216 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:37:46.512224 | orchestrator | 2026-01-28 00:37:46.512232 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-01-28 00:37:46.512240 | orchestrator | Wednesday 28 January 2026 00:37:44 +0000 (0:00:01.161) 0:00:23.356 ***** 2026-01-28 00:37:46.512248 | orchestrator | ok: [testbed-manager] 2026-01-28 00:37:46.512255 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:37:46.512263 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:37:46.512271 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:37:46.512283 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:37:46.512291 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:37:46.512298 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:37:46.512306 | orchestrator | 2026-01-28 00:37:46.512314 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-28 00:37:46.512322 | orchestrator | Wednesday 28 January 2026 00:37:45 +0000 (0:00:00.656) 0:00:24.013 ***** 2026-01-28 00:37:46.512330 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-01-28 00:37:46.512338 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-01-28 00:37:46.512346 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-01-28 00:37:46.512354 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-01-28 00:37:46.512362 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-28 00:37:46.512370 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-01-28 00:37:46.512377 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-28 00:37:46.512385 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-01-28 00:37:46.512393 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-28 00:37:46.512400 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-28 00:37:46.512408 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-28 00:37:46.512416 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-01-28 00:37:46.512424 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-28 00:37:46.512432 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-01-28 00:37:46.512440 | orchestrator | 2026-01-28 00:37:46.512452 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-01-28 00:38:03.983875 | orchestrator | Wednesday 28 January 2026 00:37:46 +0000 (0:00:01.260) 0:00:25.273 ***** 2026-01-28 00:38:03.983996 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:38:03.984019 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:38:03.984035 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:38:03.984050 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:38:03.984119 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:38:03.984136 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:38:03.984152 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:38:03.984167 | orchestrator | 2026-01-28 00:38:03.984185 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-01-28 00:38:03.984201 | orchestrator | Wednesday 28 January 2026 00:37:47 +0000 (0:00:00.650) 0:00:25.924 ***** 2026-01-28 00:38:03.984218 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-4, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-5 2026-01-28 00:38:03.984235 | orchestrator | 2026-01-28 00:38:03.984250 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-01-28 00:38:03.984264 | orchestrator | Wednesday 28 January 2026 00:37:51 +0000 (0:00:04.568) 0:00:30.493 ***** 2026-01-28 00:38:03.984281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984297 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 
42}}) 2026-01-28 00:38:03.984373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984398 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984414 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984578 | orchestrator | 2026-01-28 00:38:03.984593 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-01-28 00:38:03.984609 | orchestrator | Wednesday 28 January 2026 00:37:58 +0000 (0:00:06.313) 0:00:36.806 ***** 2026-01-28 00:38:03.984624 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984667 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984681 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984696 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-01-28 00:38:03.984748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984763 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 
'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:03.984832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:10.364997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-01-28 00:38:10.365133 | orchestrator | 2026-01-28 00:38:10.365152 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-01-28 00:38:10.365166 | orchestrator | Wednesday 28 January 2026 00:38:03 +0000 (0:00:05.937) 0:00:42.743 ***** 2026-01-28 00:38:10.365180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:38:10.365191 | orchestrator | 2026-01-28 00:38:10.365202 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-01-28 00:38:10.365213 | orchestrator | Wednesday 28 January 2026 00:38:05 +0000 (0:00:01.280) 0:00:44.024 ***** 2026-01-28 00:38:10.365225 | orchestrator | ok: [testbed-manager] 2026-01-28 00:38:10.365239 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:38:10.365250 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:38:10.365261 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:38:10.365272 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:38:10.365283 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:38:10.365293 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:38:10.365304 | orchestrator | 2026-01-28 00:38:10.365315 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-01-28 00:38:10.365326 | orchestrator | Wednesday 28 January 2026 00:38:06 +0000 (0:00:01.236) 0:00:45.260 ***** 2026-01-28 00:38:10.365337 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-28 00:38:10.365349 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-28 00:38:10.365360 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-28 00:38:10.365371 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-28 00:38:10.365381 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:38:10.365394 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-28 00:38:10.365405 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-28 00:38:10.365416 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-28 00:38:10.365427 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-28 00:38:10.365438 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:38:10.365449 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-28 00:38:10.365460 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-28 00:38:10.365493 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-28 00:38:10.365505 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-28 00:38:10.365516 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:38:10.365530 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-28 00:38:10.365573 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-28 00:38:10.365598 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-28 00:38:10.365610 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-28 00:38:10.365623 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:38:10.365636 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-28 00:38:10.365649 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-28 00:38:10.365661 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-01-28 00:38:10.365673 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-01-28 00:38:10.365686 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:38:10.365698 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-01-28 00:38:10.365710 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-01-28 00:38:10.365722 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  
2026-01-28 00:38:10.365734 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-28 00:38:10.365747 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:38:10.365760 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-01-28 00:38:10.365772 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-01-28 00:38:10.365785 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-01-28 00:38:10.365797 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-01-28 00:38:10.365809 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:38:10.365822 | orchestrator |
2026-01-28 00:38:10.365834 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-01-28 00:38:10.365863 | orchestrator | Wednesday 28 January 2026 00:38:08 +0000 (0:00:02.075) 0:00:47.336 *****
2026-01-28 00:38:10.365877 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:38:10.365890 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:38:10.365901 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:38:10.365912 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:38:10.365923 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:38:10.365933 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:38:10.365944 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:38:10.365955 | orchestrator |
2026-01-28 00:38:10.365966 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-01-28 00:38:10.365977 | orchestrator | Wednesday 28 January 2026 00:38:09 +0000 (0:00:00.641) 0:00:47.977 *****
2026-01-28 00:38:10.365988 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:38:10.365999 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:38:10.366010 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:38:10.366150 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:38:10.366201 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:38:10.366213 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:38:10.366224 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:38:10.366234 | orchestrator |
2026-01-28 00:38:10.366245 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:38:10.366258 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-28 00:38:10.366283 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-28 00:38:10.366294 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-28 00:38:10.366305 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-28 00:38:10.366315 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-28 00:38:10.366326 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-28 00:38:10.366337 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-28 00:38:10.366348 | orchestrator |
2026-01-28 00:38:10.366359 | orchestrator |
2026-01-28 00:38:10.366370 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:38:10.366381 | orchestrator | Wednesday 28 January 2026 00:38:09 +0000 (0:00:00.735) 0:00:48.712 *****
2026-01-28 00:38:10.366392 | orchestrator | ===============================================================================
2026-01-28 00:38:10.366403 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.31s
2026-01-28 00:38:10.366413 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.94s
2026-01-28 00:38:10.366424 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.57s
2026-01-28 00:38:10.366435 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.37s
2026-01-28 00:38:10.366453 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.29s
2026-01-28 00:38:10.366464 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.21s
2026-01-28 00:38:10.366475 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.08s
2026-01-28 00:38:10.366486 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.82s
2026-01-28 00:38:10.366496 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.79s
2026-01-28 00:38:10.366507 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.66s
2026-01-28 00:38:10.366518 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s
2026-01-28 00:38:10.366529 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.33s
2026-01-28 00:38:10.366539 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.28s
2026-01-28 00:38:10.366550 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.26s
2026-01-28 00:38:10.366561 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.26s
2026-01-28 00:38:10.366571 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.24s
2026-01-28 00:38:10.366582 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.23s
2026-01-28 00:38:10.366593 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s
2026-01-28 00:38:10.366603 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s
2026-01-28 00:38:10.366614 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.90s
2026-01-28 00:38:10.674070 | orchestrator | + osism apply wireguard
2026-01-28 00:38:22.619898 | orchestrator | 2026-01-28 00:38:22 | INFO  | Task 534ab4a9-bf64-4689-90ff-7aa5857fc9bd (wireguard) was prepared for execution.
2026-01-28 00:38:22.620009 | orchestrator | 2026-01-28 00:38:22 | INFO  | It takes a moment until task 534ab4a9-bf64-4689-90ff-7aa5857fc9bd (wireguard) has been started and output is visible here.
2026-01-28 00:38:43.168484 | orchestrator |
2026-01-28 00:38:43.168603 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-01-28 00:38:43.168624 | orchestrator |
2026-01-28 00:38:43.168641 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-01-28 00:38:43.168658 | orchestrator | Wednesday 28 January 2026 00:38:26 +0000 (0:00:00.215) 0:00:00.215 *****
2026-01-28 00:38:43.168674 | orchestrator | ok: [testbed-manager]
2026-01-28 00:38:43.168691 | orchestrator |
2026-01-28 00:38:43.168711 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-01-28 00:38:43.168728 | orchestrator | Wednesday 28 January 2026 00:38:28 +0000 (0:00:01.549) 0:00:01.764 *****
2026-01-28 00:38:43.168744 | orchestrator | changed: [testbed-manager]
2026-01-28 00:38:43.168761 | orchestrator |
2026-01-28 00:38:43.168778 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-01-28 00:38:43.168794 | orchestrator | Wednesday 28 January 2026 00:38:35 +0000 (0:00:06.752) 0:00:08.516 *****
2026-01-28 00:38:43.168810 | orchestrator | changed: [testbed-manager]
2026-01-28 00:38:43.168826 | orchestrator |
2026-01-28 00:38:43.168842 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-01-28 00:38:43.168858 | orchestrator | Wednesday 28 January 2026 00:38:35 +0000 (0:00:00.549) 0:00:09.066 *****
2026-01-28 00:38:43.168874 | orchestrator | changed: [testbed-manager]
2026-01-28 00:38:43.168890 | orchestrator |
2026-01-28 00:38:43.168906 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-01-28 00:38:43.168922 | orchestrator | Wednesday 28 January 2026 00:38:36 +0000 (0:00:00.432) 0:00:09.499 *****
2026-01-28 00:38:43.168938 | orchestrator | ok: [testbed-manager]
2026-01-28 00:38:43.168954 | orchestrator |
2026-01-28 00:38:43.168969 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-01-28 00:38:43.168984 | orchestrator | Wednesday 28 January 2026 00:38:36 +0000 (0:00:00.704) 0:00:10.204 *****
2026-01-28 00:38:43.168999 | orchestrator | ok: [testbed-manager]
2026-01-28 00:38:43.169014 | orchestrator |
2026-01-28 00:38:43.169030 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-01-28 00:38:43.169045 | orchestrator | Wednesday 28 January 2026 00:38:37 +0000 (0:00:00.442) 0:00:10.647 *****
2026-01-28 00:38:43.169060 | orchestrator | ok: [testbed-manager]
2026-01-28 00:38:43.169075 | orchestrator |
2026-01-28 00:38:43.169166 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-01-28 00:38:43.169185 | orchestrator | Wednesday 28 January 2026 00:38:37 +0000 (0:00:00.424) 0:00:11.071 *****
2026-01-28 00:38:43.169201 | orchestrator | changed: [testbed-manager]
2026-01-28 00:38:43.169217 | orchestrator |
2026-01-28 00:38:43.169233 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-01-28 00:38:43.169248 | orchestrator | Wednesday 28 January 2026 00:38:39 +0000 (0:00:01.284) 0:00:12.356 *****
2026-01-28 00:38:43.169262 | orchestrator | changed: [testbed-manager] => (item=None)
2026-01-28 00:38:43.169277 | orchestrator | changed: [testbed-manager]
2026-01-28 00:38:43.169292 | orchestrator |
2026-01-28 00:38:43.169307 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-01-28 00:38:43.169322 | orchestrator | Wednesday 28 January 2026 00:38:39 +0000 (0:00:00.957) 0:00:13.313 *****
2026-01-28 00:38:43.169337 | orchestrator | changed: [testbed-manager]
2026-01-28 00:38:43.169352 | orchestrator |
2026-01-28 00:38:43.169366 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-01-28 00:38:43.169381 | orchestrator | Wednesday 28 January 2026 00:38:41 +0000 (0:00:01.750) 0:00:15.064 *****
2026-01-28 00:38:43.169396 | orchestrator | changed: [testbed-manager]
2026-01-28 00:38:43.169411 | orchestrator |
2026-01-28 00:38:43.169425 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:38:43.169441 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:38:43.169490 | orchestrator |
2026-01-28 00:38:43.169505 | orchestrator |
2026-01-28 00:38:43.169519 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:38:43.169549 | orchestrator | Wednesday 28 January 2026 00:38:42 +0000 (0:00:00.952) 0:00:16.017 *****
2026-01-28 00:38:43.169563 | orchestrator | ===============================================================================
2026-01-28 00:38:43.169577 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.75s
2026-01-28 00:38:43.169591 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.75s
2026-01-28 00:38:43.169605 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.55s
2026-01-28 00:38:43.169618 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.28s
2026-01-28 00:38:43.169634 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.96s
2026-01-28 00:38:43.169649 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s
2026-01-28 00:38:43.169663 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.70s
2026-01-28 00:38:43.169678 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2026-01-28 00:38:43.169693 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.44s
2026-01-28 00:38:43.169708 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2026-01-28 00:38:43.169724 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2026-01-28 00:38:43.498081 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-01-28 00:38:43.539868 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-01-28 00:38:43.539960 | orchestrator | Dload Upload Total Spent Left Speed
2026-01-28 00:38:43.619602 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 187 0 --:--:-- --:--:-- --:--:-- 189
2026-01-28 00:38:43.635685 | orchestrator | + osism apply --environment custom workarounds
2026-01-28 00:38:45.367239 | orchestrator | 2026-01-28 00:38:45 | INFO  | Trying to run play workarounds in environment custom
2026-01-28 00:38:55.463081 | orchestrator | 2026-01-28 00:38:55 | INFO  | Task 926b89cc-d526-4375-9b80-4b1a9b253d6c (workarounds) was prepared for execution.
2026-01-28 00:38:55.463250 | orchestrator | 2026-01-28 00:38:55 | INFO  | It takes a moment until task 926b89cc-d526-4375-9b80-4b1a9b253d6c (workarounds) has been started and output is visible here.
2026-01-28 00:39:21.035913 | orchestrator |
2026-01-28 00:39:21.036033 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 00:39:21.036058 | orchestrator |
2026-01-28 00:39:21.036078 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-01-28 00:39:21.036147 | orchestrator | Wednesday 28 January 2026 00:38:59 +0000 (0:00:00.128) 0:00:00.128 *****
2026-01-28 00:39:21.036163 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-01-28 00:39:21.036175 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-01-28 00:39:21.036186 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-01-28 00:39:21.036197 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-01-28 00:39:21.036208 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-01-28 00:39:21.036219 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-01-28 00:39:21.036230 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-01-28 00:39:21.036241 | orchestrator |
2026-01-28 00:39:21.036252 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-01-28 00:39:21.036263 | orchestrator |
2026-01-28 00:39:21.036274 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-28 00:39:21.036312 | orchestrator | Wednesday 28 January 2026 00:39:00 +0000 (0:00:00.807) 0:00:00.935 *****
2026-01-28 00:39:21.036324 | orchestrator | ok: [testbed-manager]
2026-01-28 00:39:21.036338 | orchestrator |
2026-01-28 00:39:21.036355 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-01-28 00:39:21.036374 | orchestrator |
2026-01-28 00:39:21.036391 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-01-28 00:39:21.036409 | orchestrator | Wednesday 28 January 2026 00:39:02 +0000 (0:00:02.212) 0:00:03.148 *****
2026-01-28 00:39:21.036427 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:39:21.036445 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:39:21.036463 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:39:21.036481 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:39:21.036499 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:39:21.036519 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:39:21.036536 | orchestrator |
2026-01-28 00:39:21.036555 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-01-28 00:39:21.036574 | orchestrator |
2026-01-28 00:39:21.036594 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-01-28 00:39:21.036614 | orchestrator | Wednesday 28 January 2026 00:39:04 +0000 (0:00:01.848) 0:00:04.997 *****
2026-01-28 00:39:21.036634 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-28 00:39:21.036656 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-28 00:39:21.036695 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-28 00:39:21.036713 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-28 00:39:21.036727 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-28 00:39:21.036740 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-01-28 00:39:21.036753 | orchestrator |
2026-01-28 00:39:21.036766 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-01-28 00:39:21.036779 | orchestrator | Wednesday 28 January 2026 00:39:06 +0000 (0:00:01.605) 0:00:06.602 *****
2026-01-28 00:39:21.036792 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:39:21.036806 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:39:21.036817 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:39:21.036828 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:39:21.036839 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:39:21.036849 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:39:21.036860 | orchestrator |
2026-01-28 00:39:21.036872 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-01-28 00:39:21.036883 | orchestrator | Wednesday 28 January 2026 00:39:10 +0000 (0:00:03.916) 0:00:10.519 *****
2026-01-28 00:39:21.036894 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:39:21.036904 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:39:21.036915 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:39:21.036926 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:39:21.036937 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:39:21.036948 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:39:21.036958 | orchestrator |
2026-01-28 00:39:21.036969 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-01-28 00:39:21.036980 | orchestrator |
2026-01-28 00:39:21.036991 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-01-28 00:39:21.037002 | orchestrator | Wednesday 28 January 2026 00:39:10 +0000 (0:00:00.580) 0:00:11.099 *****
2026-01-28 00:39:21.037012 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:39:21.037023 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:39:21.037034 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:39:21.037055 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:39:21.037066 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:39:21.037076 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:39:21.037087 | orchestrator | changed: [testbed-manager]
2026-01-28 00:39:21.037125 | orchestrator |
2026-01-28 00:39:21.037137 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-01-28 00:39:21.037148 | orchestrator | Wednesday 28 January 2026 00:39:12 +0000 (0:00:01.405) 0:00:12.504 *****
2026-01-28 00:39:21.037159 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:39:21.037170 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:39:21.037181 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:39:21.037192 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:39:21.037203 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:39:21.037214 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:39:21.037247 | orchestrator | changed: [testbed-manager]
2026-01-28 00:39:21.037259 | orchestrator |
2026-01-28 00:39:21.037270 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-01-28 00:39:21.037281 | orchestrator | Wednesday 28 January 2026 00:39:13 +0000 (0:00:01.502) 0:00:14.007 *****
2026-01-28 00:39:21.037292 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:39:21.037303 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:39:21.037314 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:39:21.037325 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:39:21.037336 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:39:21.037347 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:39:21.037358 | orchestrator | ok: [testbed-manager]
2026-01-28 00:39:21.037368 | orchestrator |
2026-01-28 00:39:21.037379 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-01-28 00:39:21.037390 | orchestrator | Wednesday 28 January 2026 00:39:15 +0000 (0:00:01.468) 0:00:15.475 *****
2026-01-28 00:39:21.037401 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:39:21.037412 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:39:21.037423 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:39:21.037434 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:39:21.037445 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:39:21.037456 | orchestrator | changed: [testbed-manager]
2026-01-28 00:39:21.037466 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:39:21.037477 | orchestrator |
2026-01-28 00:39:21.037488 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-01-28 00:39:21.037499 | orchestrator | Wednesday 28 January 2026 00:39:17 +0000 (0:00:02.061) 0:00:17.537 *****
2026-01-28 00:39:21.037510 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:39:21.037521 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:39:21.037532 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:39:21.037543 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:39:21.037553 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:39:21.037572 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:39:21.037590 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:39:21.037609 | orchestrator |
2026-01-28 00:39:21.037628 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-01-28 00:39:21.037641 | orchestrator |
2026-01-28 00:39:21.037652 | orchestrator | TASK [Install python3-docker] **************************************************
2026-01-28 00:39:21.037663 | orchestrator | Wednesday 28 January 2026 00:39:17 +0000 (0:00:00.675) 0:00:18.213 *****
2026-01-28 00:39:21.037674 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:39:21.037685 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:39:21.037695 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:39:21.037706 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:39:21.037717 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:39:21.037728 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:39:21.037739 | orchestrator | ok: [testbed-manager]
2026-01-28 00:39:21.037749 | orchestrator |
2026-01-28 00:39:21.037760 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:39:21.037783 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-01-28 00:39:21.037802 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:39:21.037813 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:39:21.037824 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:39:21.037835 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:39:21.037846 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:39:21.037857 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:39:21.037868 | orchestrator |
2026-01-28 00:39:21.037879 | orchestrator |
2026-01-28 00:39:21.037890 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:39:21.037901 | orchestrator | Wednesday 28 January 2026 00:39:21 +0000 (0:00:03.099) 0:00:21.313 *****
2026-01-28 00:39:21.037912 | orchestrator | ===============================================================================
2026-01-28 00:39:21.037923 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.92s
2026-01-28 00:39:21.037934 | orchestrator | Install python3-docker -------------------------------------------------- 3.10s
2026-01-28 00:39:21.037945 | orchestrator | Apply netplan configuration --------------------------------------------- 2.21s
2026-01-28 00:39:21.037955 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.06s
2026-01-28 00:39:21.037966 | orchestrator | Apply netplan configuration --------------------------------------------- 1.85s
2026-01-28 00:39:21.037977 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.61s
2026-01-28 00:39:21.037988 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.50s
2026-01-28 00:39:21.037998 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.47s
2026-01-28 00:39:21.038009 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.41s
2026-01-28 00:39:21.038081 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.81s
2026-01-28 00:39:21.038132 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.68s
2026-01-28 00:39:21.038167 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.58s
2026-01-28 00:39:22.046740 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-01-28 00:39:34.235316 | orchestrator | 2026-01-28 00:39:34 | INFO  | Task f9951379-af80-4465-8479-2dec8043d9e5 (reboot) was prepared for execution.
2026-01-28 00:39:34.235422 | orchestrator | 2026-01-28 00:39:34 | INFO  | It takes a moment until task f9951379-af80-4465-8479-2dec8043d9e5 (reboot) has been started and output is visible here.
2026-01-28 00:39:44.505343 | orchestrator |
2026-01-28 00:39:44.505475 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-28 00:39:44.505503 | orchestrator |
2026-01-28 00:39:44.505524 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-28 00:39:44.505543 | orchestrator | Wednesday 28 January 2026 00:39:38 +0000 (0:00:00.209) 0:00:00.209 *****
2026-01-28 00:39:44.505561 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:39:44.505582 | orchestrator |
2026-01-28 00:39:44.505599 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-28 00:39:44.505648 | orchestrator | Wednesday 28 January 2026 00:39:38 +0000 (0:00:00.113) 0:00:00.322 *****
2026-01-28 00:39:44.505668 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:39:44.505688 | orchestrator |
2026-01-28 00:39:44.505707 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-28 00:39:44.505726 | orchestrator | Wednesday 28 January 2026 00:39:39 +0000 (0:00:00.975) 0:00:01.298 *****
2026-01-28 00:39:44.505745 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:39:44.505763 | orchestrator |
2026-01-28 00:39:44.505781 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-28 00:39:44.505798 | orchestrator |
2026-01-28 00:39:44.505816 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-28 00:39:44.505833 | orchestrator | Wednesday 28 January 2026 00:39:39 +0000 (0:00:00.121) 0:00:01.419 *****
2026-01-28 00:39:44.505851 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:39:44.505869 | orchestrator |
2026-01-28 00:39:44.505889 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-28 00:39:44.505909 | orchestrator | Wednesday 28 January 2026 00:39:39 +0000 (0:00:00.106) 0:00:01.526 *****
2026-01-28 00:39:44.505926 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:39:44.505944 | orchestrator |
2026-01-28 00:39:44.505963 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-28 00:39:44.505981 | orchestrator | Wednesday 28 January 2026 00:39:40 +0000 (0:00:00.681) 0:00:02.208 *****
2026-01-28 00:39:44.506000 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:39:44.506080 | orchestrator |
2026-01-28 00:39:44.506150 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-28 00:39:44.506171 | orchestrator |
2026-01-28 00:39:44.506188 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-28 00:39:44.506206 | orchestrator | Wednesday 28 January 2026 00:39:40 +0000 (0:00:00.111) 0:00:02.319 *****
2026-01-28 00:39:44.506245 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:39:44.506264 | orchestrator |
2026-01-28 00:39:44.506281 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-28 00:39:44.506299 | orchestrator | Wednesday 28 January 2026 00:39:40 +0000 (0:00:00.286) 0:00:02.605 *****
2026-01-28 00:39:44.506317 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:39:44.506334 | orchestrator |
2026-01-28 00:39:44.506352 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-28 00:39:44.506371 | orchestrator | Wednesday 28 January 2026 00:39:41 +0000 (0:00:00.702) 0:00:03.307 *****
2026-01-28 00:39:44.506391 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:39:44.506409 | orchestrator |
2026-01-28 00:39:44.506427 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-28 00:39:44.506446 | orchestrator |
2026-01-28 00:39:44.506465 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-28 00:39:44.506484 | orchestrator | Wednesday 28 January 2026 00:39:41 +0000 (0:00:00.127) 0:00:03.435 *****
2026-01-28 00:39:44.506503 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:39:44.506521 | orchestrator |
2026-01-28 00:39:44.506539 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-28 00:39:44.506556 | orchestrator | Wednesday 28 January 2026 00:39:41 +0000 (0:00:00.114) 0:00:03.549 *****
2026-01-28 00:39:44.506573 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:39:44.506590 | orchestrator |
2026-01-28 00:39:44.506608 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-28 00:39:44.506625 | orchestrator | Wednesday 28 January 2026 00:39:42 +0000 (0:00:00.691) 0:00:04.241 *****
2026-01-28 00:39:44.506643 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:39:44.506661 | orchestrator |
2026-01-28 00:39:44.506679 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-28 00:39:44.506697 | orchestrator |
2026-01-28 00:39:44.506715 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-28 00:39:44.506731 | orchestrator | Wednesday 28 January 2026 00:39:42 +0000 (0:00:00.116) 0:00:04.357 *****
2026-01-28 00:39:44.506769 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:39:44.506787 | orchestrator |
2026-01-28 00:39:44.506805 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-28 00:39:44.506821 | orchestrator | Wednesday 28 January 2026 00:39:42 +0000 (0:00:00.112) 0:00:04.470 *****
2026-01-28 00:39:44.506838 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:39:44.506855 | orchestrator |
2026-01-28 00:39:44.506873 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-28 00:39:44.506890 | orchestrator | Wednesday 28 January 2026 00:39:43 +0000 (0:00:00.729) 0:00:05.200 *****
2026-01-28 00:39:44.506908 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:39:44.506924 | orchestrator |
2026-01-28 00:39:44.506941 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-01-28 00:39:44.506958 | orchestrator |
2026-01-28 00:39:44.506975 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-01-28 00:39:44.506993 | orchestrator | Wednesday 28 January 2026 00:39:43 +0000 (0:00:00.126) 0:00:05.326 *****
2026-01-28 00:39:44.507011 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:39:44.507029 | orchestrator |
2026-01-28 00:39:44.507048 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-01-28 00:39:44.507066 | orchestrator | Wednesday 28 January 2026 00:39:43 +0000 (0:00:00.109) 0:00:05.436 *****
2026-01-28 00:39:44.507085 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:39:44.507131 | orchestrator |
2026-01-28 00:39:44.507150 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-01-28 00:39:44.507170 | orchestrator | Wednesday 28 January 2026 00:39:44 +0000 (0:00:00.666) 0:00:06.102 *****
2026-01-28 00:39:44.507223 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:39:44.507243 | orchestrator |
2026-01-28 00:39:44.507262 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:39:44.507281 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:39:44.507301 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:39:44.507319 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:39:44.507338 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:39:44.507357 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:39:44.507376 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:39:44.507395 | orchestrator | 2026-01-28 00:39:44.507414 | orchestrator | 2026-01-28 00:39:44.507433 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:39:44.507445 | orchestrator | Wednesday 28 January 2026 00:39:44 +0000 (0:00:00.041) 0:00:06.144 ***** 2026-01-28 00:39:44.507456 | orchestrator | =============================================================================== 2026-01-28 00:39:44.507467 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.45s 2026-01-28 00:39:44.507478 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.84s 2026-01-28 00:39:44.507493 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.64s 2026-01-28 00:39:44.843765 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-01-28 00:39:57.013350 | orchestrator | 2026-01-28 00:39:57 | INFO  | Task e38473a5-4e9b-4894-849e-e4333b962412 (wait-for-connection) was prepared for execution. 2026-01-28 00:39:57.013478 | orchestrator | 2026-01-28 00:39:57 | INFO  | It takes a moment until task e38473a5-4e9b-4894-849e-e4333b962412 (wait-for-connection) has been started and output is visible here. 
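The sequence above is the usual two-phase reboot pattern: trigger the reboot without waiting, then run a separate `wait-for-connection` play until every node answers again. As a hedged illustration only (the hostnames, timeout, and `ssh` options below are placeholders, not values taken from the actual playbook), the second phase behaves roughly like this shell-level probe:

```shell
# Hypothetical sketch of a "wait until remote system is reachable" probe.
# The real job uses Ansible's wait_for_connection; this is only an analogy.
wait_for_ssh() {
    local host=$1 timeout=${2:-300} waited=0
    # Retry a cheap no-op command over SSH until it succeeds or we time out.
    until ssh -o ConnectTimeout=5 -o BatchMode=yes "$host" true 2>/dev/null; do
        (( waited += 5 ))
        if (( waited >= timeout )); then
            echo "Timed out waiting for $host" >&2
            return 1
        fi
        sleep 5
    done
    echo "$host is reachable"
}
```

Running the reboot play with `wait_for_reboot` disabled and following it with a dedicated wait play lets all nodes reboot in parallel instead of serializing on each host's recovery.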
2026-01-28 00:40:13.351611 | orchestrator | 2026-01-28 00:40:13.351720 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-01-28 00:40:13.351736 | orchestrator | 2026-01-28 00:40:13.351749 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-01-28 00:40:13.351760 | orchestrator | Wednesday 28 January 2026 00:40:01 +0000 (0:00:00.216) 0:00:00.216 ***** 2026-01-28 00:40:13.351772 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:40:13.351785 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:40:13.351796 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:40:13.351807 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:40:13.351818 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:40:13.351829 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:40:13.351840 | orchestrator | 2026-01-28 00:40:13.351850 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:40:13.351862 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:40:13.351875 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:40:13.351886 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:40:13.351897 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:40:13.351908 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:40:13.351919 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:40:13.351930 | orchestrator | 2026-01-28 00:40:13.351941 | orchestrator | 2026-01-28 00:40:13.351951 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-28 00:40:13.351962 | orchestrator | Wednesday 28 January 2026 00:40:12 +0000 (0:00:11.687) 0:00:11.903 ***** 2026-01-28 00:40:13.351973 | orchestrator | =============================================================================== 2026-01-28 00:40:13.351984 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.69s 2026-01-28 00:40:13.713671 | orchestrator | + osism apply hddtemp 2026-01-28 00:40:25.830828 | orchestrator | 2026-01-28 00:40:25 | INFO  | Task c9e4b864-1412-42a4-9466-7a5359de5d2e (hddtemp) was prepared for execution. 2026-01-28 00:40:25.830932 | orchestrator | 2026-01-28 00:40:25 | INFO  | It takes a moment until task c9e4b864-1412-42a4-9466-7a5359de5d2e (hddtemp) has been started and output is visible here. 2026-01-28 00:40:56.119426 | orchestrator | 2026-01-28 00:40:56.119484 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-01-28 00:40:56.119497 | orchestrator | 2026-01-28 00:40:56.119509 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-01-28 00:40:56.119520 | orchestrator | Wednesday 28 January 2026 00:40:30 +0000 (0:00:00.279) 0:00:00.279 ***** 2026-01-28 00:40:56.119532 | orchestrator | ok: [testbed-manager] 2026-01-28 00:40:56.119544 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:40:56.119555 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:40:56.119566 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:40:56.119577 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:40:56.119588 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:40:56.119599 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:40:56.119610 | orchestrator | 2026-01-28 00:40:56.119621 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-01-28 00:40:56.119631 | orchestrator | Wednesday 28 January 2026 
00:40:30 +0000 (0:00:00.786) 0:00:01.066 ***** 2026-01-28 00:40:56.119664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:40:56.119678 | orchestrator | 2026-01-28 00:40:56.119690 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-01-28 00:40:56.119700 | orchestrator | Wednesday 28 January 2026 00:40:32 +0000 (0:00:01.382) 0:00:02.448 ***** 2026-01-28 00:40:56.119711 | orchestrator | ok: [testbed-manager] 2026-01-28 00:40:56.119723 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:40:56.119733 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:40:56.119744 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:40:56.119755 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:40:56.119766 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:40:56.119776 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:40:56.119787 | orchestrator | 2026-01-28 00:40:56.119798 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-01-28 00:40:56.119809 | orchestrator | Wednesday 28 January 2026 00:40:34 +0000 (0:00:02.281) 0:00:04.730 ***** 2026-01-28 00:40:56.119819 | orchestrator | changed: [testbed-manager] 2026-01-28 00:40:56.119831 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:40:56.119842 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:40:56.119853 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:40:56.119863 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:40:56.119874 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:40:56.119885 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:40:56.119896 | orchestrator | 2026-01-28 00:40:56.119917 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-01-28 00:40:56.119928 | orchestrator | Wednesday 28 January 2026 00:40:35 +0000 (0:00:01.099) 0:00:05.829 ***** 2026-01-28 00:40:56.119939 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:40:56.119950 | orchestrator | ok: [testbed-manager] 2026-01-28 00:40:56.119961 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:40:56.119972 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:40:56.119982 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:40:56.119993 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:40:56.120004 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:40:56.120016 | orchestrator | 2026-01-28 00:40:56.120029 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-01-28 00:40:56.120041 | orchestrator | Wednesday 28 January 2026 00:40:37 +0000 (0:00:01.793) 0:00:07.623 ***** 2026-01-28 00:40:56.120054 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:40:56.120066 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:40:56.120080 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:40:56.120092 | orchestrator | changed: [testbed-manager] 2026-01-28 00:40:56.120105 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:40:56.120155 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:40:56.120169 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:40:56.120182 | orchestrator | 2026-01-28 00:40:56.120195 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-01-28 00:40:56.120206 | orchestrator | Wednesday 28 January 2026 00:40:38 +0000 (0:00:00.746) 0:00:08.369 ***** 2026-01-28 00:40:56.120217 | orchestrator | changed: [testbed-manager] 2026-01-28 00:40:56.120228 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:40:56.120238 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:40:56.120249 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:40:56.120260 | orchestrator | changed: 
[testbed-node-0] 2026-01-28 00:40:56.120270 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:40:56.120281 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:40:56.120292 | orchestrator | 2026-01-28 00:40:56.120302 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-01-28 00:40:56.120313 | orchestrator | Wednesday 28 January 2026 00:40:52 +0000 (0:00:14.220) 0:00:22.590 ***** 2026-01-28 00:40:56.120324 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:40:56.120348 | orchestrator | 2026-01-28 00:40:56.120368 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-01-28 00:40:56.120388 | orchestrator | Wednesday 28 January 2026 00:40:53 +0000 (0:00:01.203) 0:00:23.794 ***** 2026-01-28 00:40:56.120406 | orchestrator | changed: [testbed-manager] 2026-01-28 00:40:56.120425 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:40:56.120437 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:40:56.120448 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:40:56.120459 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:40:56.120469 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:40:56.120480 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:40:56.120491 | orchestrator | 2026-01-28 00:40:56.120501 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:40:56.120513 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:40:56.120537 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:40:56.120549 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:40:56.120560 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:40:56.120571 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:40:56.120581 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:40:56.120592 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:40:56.120606 | orchestrator | 2026-01-28 00:40:56.120622 | orchestrator | 2026-01-28 00:40:56.120633 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:40:56.120644 | orchestrator | Wednesday 28 January 2026 00:40:55 +0000 (0:00:02.046) 0:00:25.840 ***** 2026-01-28 00:40:56.120655 | orchestrator | =============================================================================== 2026-01-28 00:40:56.120666 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.22s 2026-01-28 00:40:56.120676 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.28s 2026-01-28 00:40:56.120687 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.05s 2026-01-28 00:40:56.120697 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.79s 2026-01-28 00:40:56.120708 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.38s 2026-01-28 00:40:56.120719 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.20s 2026-01-28 00:40:56.120729 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.10s 2026-01-28 00:40:56.120745 | orchestrator | osism.services.hddtemp : Gather 
variables for each operating system ----- 0.79s 2026-01-28 00:40:56.120756 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.75s 2026-01-28 00:40:56.383736 | orchestrator | ++ semver 9.5.0 7.1.1 2026-01-28 00:40:56.422347 | orchestrator | + [[ 1 -ge 0 ]] 2026-01-28 00:40:56.422425 | orchestrator | + sudo systemctl restart manager.service 2026-01-28 00:41:09.830998 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-01-28 00:41:09.831104 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-01-28 00:41:09.831187 | orchestrator | + local max_attempts=60 2026-01-28 00:41:09.831202 | orchestrator | + local name=ceph-ansible 2026-01-28 00:41:09.831212 | orchestrator | + local attempt_num=1 2026-01-28 00:41:09.831222 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:41:09.866841 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:41:09.866931 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:41:09.866946 | orchestrator | + sleep 5 2026-01-28 00:41:14.872639 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:41:14.949937 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:41:14.950179 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:41:14.950207 | orchestrator | + sleep 5 2026-01-28 00:41:19.953293 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:41:19.987556 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:41:19.987669 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:41:19.987685 | orchestrator | + sleep 5 2026-01-28 00:41:24.992265 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:41:25.029340 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:41:25.029459 | orchestrator | + 
(( attempt_num++ == max_attempts )) 2026-01-28 00:41:25.029481 | orchestrator | + sleep 5 2026-01-28 00:41:30.033738 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:41:30.064552 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:41:30.064646 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:41:30.064664 | orchestrator | + sleep 5 2026-01-28 00:41:35.069347 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:41:35.113814 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:41:35.113916 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:41:35.113934 | orchestrator | + sleep 5 2026-01-28 00:41:40.118781 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:41:40.166553 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:41:40.166641 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:41:40.166663 | orchestrator | + sleep 5 2026-01-28 00:41:45.172095 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:41:45.219692 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-28 00:41:45.219889 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:41:45.219998 | orchestrator | + sleep 5 2026-01-28 00:41:50.221507 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:41:50.257257 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-28 00:41:50.257351 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:41:50.257367 | orchestrator | + sleep 5 2026-01-28 00:41:55.261075 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:41:55.306570 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-28 00:41:55.306663 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-01-28 00:41:55.306678 | orchestrator | + sleep 5 2026-01-28 00:42:00.312298 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:42:00.358682 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-28 00:42:00.358779 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:42:00.358796 | orchestrator | + sleep 5 2026-01-28 00:42:05.364184 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:42:05.404589 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-28 00:42:05.404665 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:42:05.404675 | orchestrator | + sleep 5 2026-01-28 00:42:10.410460 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:42:10.458722 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-01-28 00:42:10.458835 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-01-28 00:42:10.458852 | orchestrator | + sleep 5 2026-01-28 00:42:15.463075 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-01-28 00:42:15.503691 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:42:15.504011 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-01-28 00:42:15.505423 | orchestrator | + local max_attempts=60 2026-01-28 00:42:15.505452 | orchestrator | + local name=kolla-ansible 2026-01-28 00:42:15.505461 | orchestrator | + local attempt_num=1 2026-01-28 00:42:15.505566 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-01-28 00:42:15.537052 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:42:15.537122 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-01-28 00:42:15.537175 | orchestrator | + local max_attempts=60 2026-01-28 00:42:15.537186 | orchestrator | + local name=osism-ansible 2026-01-28 00:42:15.537194 | 
orchestrator | + local attempt_num=1 2026-01-28 00:42:15.537796 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-01-28 00:42:15.597640 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-01-28 00:42:15.597767 | orchestrator | + [[ true == \t\r\u\e ]] 2026-01-28 00:42:15.597796 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-01-28 00:42:15.795565 | orchestrator | ARA in ceph-ansible already disabled. 2026-01-28 00:42:15.944820 | orchestrator | ARA in kolla-ansible already disabled. 2026-01-28 00:42:16.094866 | orchestrator | ARA in osism-ansible already disabled. 2026-01-28 00:42:16.255427 | orchestrator | ARA in osism-kubernetes already disabled. 2026-01-28 00:42:16.255705 | orchestrator | + osism apply gather-facts 2026-01-28 00:42:28.297955 | orchestrator | 2026-01-28 00:42:28 | INFO  | Task b7a7e747-42fc-43b3-9c29-4293cbeedb52 (gather-facts) was prepared for execution. 2026-01-28 00:42:28.298198 | orchestrator | 2026-01-28 00:42:28 | INFO  | It takes a moment until task b7a7e747-42fc-43b3-9c29-4293cbeedb52 (gather-facts) has been started and output is visible here. 
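The xtrace above reveals the shape of the `wait_for_container_healthy` helper: it polls `docker inspect` for the container's health status (the trace shows it passing through `unhealthy`, then `starting`, then `healthy` for ceph-ansible) and sleeps 5 seconds between attempts. A reconstruction consistent with that trace, with the caveat that the trace shows `/usr/bin/docker` as an absolute path and does not show what happens when `max_attempts` is exhausted (the error message below is an assumption):

```shell
# Reconstructed from the set -x trace; failure handling is assumed.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the Docker health status until the container reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second sleep, the job allows each of the ceph-ansible, kolla-ansible, and osism-ansible containers roughly five minutes to pass their health checks after the manager service restart.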
2026-01-28 00:42:40.779665 | orchestrator | 2026-01-28 00:42:40.779772 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-28 00:42:40.779791 | orchestrator | 2026-01-28 00:42:40.779804 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-01-28 00:42:40.779816 | orchestrator | Wednesday 28 January 2026 00:42:31 +0000 (0:00:00.169) 0:00:00.169 ***** 2026-01-28 00:42:40.779828 | orchestrator | ok: [testbed-manager] 2026-01-28 00:42:40.779842 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:42:40.779854 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:42:40.779865 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:42:40.779876 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:42:40.779887 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:42:40.779898 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:42:40.779914 | orchestrator | 2026-01-28 00:42:40.779932 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-28 00:42:40.779950 | orchestrator | 2026-01-28 00:42:40.779968 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-28 00:42:40.779987 | orchestrator | Wednesday 28 January 2026 00:42:39 +0000 (0:00:08.194) 0:00:08.363 ***** 2026-01-28 00:42:40.780005 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:42:40.780027 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:42:40.780040 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:42:40.780051 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:42:40.780061 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:42:40.780072 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:42:40.780083 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:42:40.780093 | orchestrator | 2026-01-28 00:42:40.780104 | orchestrator | PLAY RECAP 
********************************************************************* 2026-01-28 00:42:40.780116 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:42:40.780127 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:42:40.780198 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:42:40.780212 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:42:40.780225 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:42:40.780264 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:42:40.780277 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 00:42:40.780290 | orchestrator | 2026-01-28 00:42:40.780303 | orchestrator | 2026-01-28 00:42:40.780316 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:42:40.780328 | orchestrator | Wednesday 28 January 2026 00:42:40 +0000 (0:00:00.499) 0:00:08.862 ***** 2026-01-28 00:42:40.780341 | orchestrator | =============================================================================== 2026-01-28 00:42:40.780353 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.19s 2026-01-28 00:42:40.780366 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-01-28 00:42:41.054898 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-01-28 00:42:41.072712 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-01-28 00:42:41.094873 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-01-28 00:42:41.104547 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-01-28 00:42:41.112664 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-01-28 00:42:41.123800 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-01-28 00:42:41.133537 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-01-28 00:42:41.144207 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-01-28 00:42:41.160114 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-01-28 00:42:41.173763 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-01-28 00:42:41.190409 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-01-28 00:42:41.203663 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-01-28 00:42:41.214759 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-01-28 00:42:41.226663 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-01-28 00:42:41.239201 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-01-28 00:42:41.250684 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-01-28 00:42:41.261350 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-01-28 00:42:41.273047 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-01-28 00:42:41.285187 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-01-28 00:42:41.298974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-01-28 00:42:41.311089 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-01-28 00:42:41.322927 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-01-28 00:42:41.336593 | orchestrator | + [[ false == \t\r\u\e ]] 2026-01-28 00:42:41.626150 | orchestrator | ok: Runtime: 0:24:27.124172 2026-01-28 00:42:41.721930 | 2026-01-28 00:42:41.722087 | TASK [Deploy services] 2026-01-28 00:42:42.254542 | orchestrator | skipping: Conditional result was False 2026-01-28 00:42:42.273285 | 2026-01-28 00:42:42.273479 | TASK [Deploy in a nutshell] 2026-01-28 00:42:42.990641 | orchestrator | + set -e 2026-01-28 00:42:42.990840 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-01-28 00:42:42.990886 | orchestrator | ++ export INTERACTIVE=false 2026-01-28 00:42:42.990918 | orchestrator | ++ INTERACTIVE=false 2026-01-28 00:42:42.990933 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-01-28 00:42:42.990945 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-01-28 00:42:42.990959 | orchestrator | + source /opt/manager-vars.sh 2026-01-28 00:42:42.991004 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-01-28 00:42:42.991047 | 
orchestrator | ++ NUMBER_OF_NODES=6
2026-01-28 00:42:42.991062 | orchestrator | ++ export CEPH_VERSION=reef
2026-01-28 00:42:42.991078 | orchestrator | ++ CEPH_VERSION=reef
2026-01-28 00:42:42.991090 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-01-28 00:42:42.992303 | orchestrator |
2026-01-28 00:42:42.992336 | orchestrator | # PULL IMAGES
2026-01-28 00:42:42.992348 | orchestrator |
2026-01-28 00:42:42.992359 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-01-28 00:42:42.992381 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-01-28 00:42:42.992392 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-01-28 00:42:42.992408 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-01-28 00:42:42.992419 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-01-28 00:42:42.992430 | orchestrator | ++ export ARA=false
2026-01-28 00:42:42.992441 | orchestrator | ++ ARA=false
2026-01-28 00:42:42.992457 | orchestrator | ++ export DEPLOY_MODE=manager
2026-01-28 00:42:42.992468 | orchestrator | ++ DEPLOY_MODE=manager
2026-01-28 00:42:42.992479 | orchestrator | ++ export TEMPEST=true
2026-01-28 00:42:42.992490 | orchestrator | ++ TEMPEST=true
2026-01-28 00:42:42.992502 | orchestrator | ++ export IS_ZUUL=true
2026-01-28 00:42:42.992512 | orchestrator | ++ IS_ZUUL=true
2026-01-28 00:42:42.992523 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-01-28 00:42:42.992535 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-01-28 00:42:42.992546 | orchestrator | ++ export EXTERNAL_API=false
2026-01-28 00:42:42.992557 | orchestrator | ++ EXTERNAL_API=false
2026-01-28 00:42:42.992567 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-01-28 00:42:42.992579 | orchestrator | ++ IMAGE_USER=ubuntu
2026-01-28 00:42:42.992590 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-01-28 00:42:42.992601 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-01-28 00:42:42.992612 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-01-28 00:42:42.992630 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-01-28 00:42:42.992642 | orchestrator | + echo
2026-01-28 00:42:42.992653 | orchestrator | + echo '# PULL IMAGES'
2026-01-28 00:42:42.992664 | orchestrator | + echo
2026-01-28 00:42:42.992778 | orchestrator | ++ semver 9.5.0 7.0.0
2026-01-28 00:42:43.050348 | orchestrator | + [[ 1 -ge 0 ]]
2026-01-28 00:42:43.050474 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-01-28 00:42:44.827658 | orchestrator | 2026-01-28 00:42:44 | INFO  | Trying to run play pull-images in environment custom
2026-01-28 00:42:54.992027 | orchestrator | 2026-01-28 00:42:54 | INFO  | Task f7732b7f-d5e7-4387-afe7-1643e9660a52 (pull-images) was prepared for execution.
2026-01-28 00:42:54.992220 | orchestrator | 2026-01-28 00:42:54 | INFO  | Task f7732b7f-d5e7-4387-afe7-1643e9660a52 is running in background. No more output. Check ARA for logs.
2026-01-28 00:42:58.366513 | orchestrator | 2026-01-28 00:42:58 | INFO  | Trying to run play wipe-partitions in environment custom
2026-01-28 00:43:08.579841 | orchestrator | 2026-01-28 00:43:08 | INFO  | Task 23f12662-522d-4a1c-a8a3-b60b87387afe (wipe-partitions) was prepared for execution.
2026-01-28 00:43:08.579948 | orchestrator | 2026-01-28 00:43:08 | INFO  | It takes a moment until task 23f12662-522d-4a1c-a8a3-b60b87387afe (wipe-partitions) has been started and output is visible here.
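The trace above gates deployment behaviour on a version comparison: `semver 9.5.0 7.0.0` printed `1`, so the `[[ 1 -ge 0 ]]` branch ran. A minimal sketch of such a comparator, assuming plain MAJOR.MINOR.PATCH inputs and GNU `sort -V` (the actual `semver` helper used by the testbed scripts may differ):

```shell
# Hypothetical semver comparator: prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
semver() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        # $2 sorts first, so $1 is the newer version
        echo 1
    else
        echo -1
    fi
}

semver 9.5.0 7.0.0   # prints 1: MANAGER_VERSION 9.5.0 is newer than 7.0.0
```

The deploy script only needs the sign of the result, which is why `-ge 0` ("equal or newer") is the test it applies.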
2026-01-28 00:43:21.529810 | orchestrator |
2026-01-28 00:43:21.529943 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-01-28 00:43:21.529966 | orchestrator |
2026-01-28 00:43:21.529983 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-01-28 00:43:21.530007 | orchestrator | Wednesday 28 January 2026 00:43:12 +0000 (0:00:00.119) 0:00:00.119 *****
2026-01-28 00:43:21.530109 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:43:21.530130 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:43:21.530201 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:43:21.530217 | orchestrator |
2026-01-28 00:43:21.530233 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-01-28 00:43:21.530283 | orchestrator | Wednesday 28 January 2026 00:43:13 +0000 (0:00:00.712) 0:00:00.832 *****
2026-01-28 00:43:21.530300 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:43:21.530318 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:43:21.530336 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:43:21.530358 | orchestrator |
2026-01-28 00:43:21.530378 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-01-28 00:43:21.530396 | orchestrator | Wednesday 28 January 2026 00:43:13 +0000 (0:00:00.346) 0:00:01.178 *****
2026-01-28 00:43:21.530414 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:43:21.530433 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:43:21.530451 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:43:21.530468 | orchestrator |
2026-01-28 00:43:21.530483 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-01-28 00:43:21.530497 | orchestrator | Wednesday 28 January 2026 00:43:14 +0000 (0:00:00.608) 0:00:01.787 *****
2026-01-28 00:43:21.530513 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:43:21.530528 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:43:21.530543 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:43:21.530557 | orchestrator |
2026-01-28 00:43:21.530572 | orchestrator | TASK [Check device availability] ***********************************************
2026-01-28 00:43:21.530587 | orchestrator | Wednesday 28 January 2026 00:43:14 +0000 (0:00:00.239) 0:00:02.026 *****
2026-01-28 00:43:21.530601 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-28 00:43:21.530621 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-28 00:43:21.530636 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-28 00:43:21.530651 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-28 00:43:21.530666 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-28 00:43:21.530679 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-28 00:43:21.530693 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-28 00:43:21.530706 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-28 00:43:21.530720 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-28 00:43:21.530733 | orchestrator |
2026-01-28 00:43:21.530747 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-01-28 00:43:21.530761 | orchestrator | Wednesday 28 January 2026 00:43:15 +0000 (0:00:01.214) 0:00:03.241 *****
2026-01-28 00:43:21.530775 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-01-28 00:43:21.530789 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-01-28 00:43:21.530802 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-01-28 00:43:21.530816 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-01-28 00:43:21.530830 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-01-28 00:43:21.530843 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-01-28 00:43:21.530857 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-01-28 00:43:21.530870 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-01-28 00:43:21.530883 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-01-28 00:43:21.530897 | orchestrator |
2026-01-28 00:43:21.530910 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-01-28 00:43:21.530924 | orchestrator | Wednesday 28 January 2026 00:43:17 +0000 (0:00:01.665) 0:00:04.906 *****
2026-01-28 00:43:21.530937 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-01-28 00:43:21.530951 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-01-28 00:43:21.530964 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-01-28 00:43:21.530977 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-01-28 00:43:21.530991 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-01-28 00:43:21.531004 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-01-28 00:43:21.531018 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-01-28 00:43:21.531031 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-01-28 00:43:21.531062 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-01-28 00:43:21.531076 | orchestrator |
2026-01-28 00:43:21.531090 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-01-28 00:43:21.531104 | orchestrator | Wednesday 28 January 2026 00:43:19 +0000 (0:00:02.316) 0:00:07.223 *****
2026-01-28 00:43:21.531117 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:43:21.531131 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:43:21.531162 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:43:21.531176 | orchestrator |
2026-01-28 00:43:21.531190 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-01-28 00:43:21.531203 | orchestrator | Wednesday 28 January 2026 00:43:20 +0000 (0:00:00.659) 0:00:07.882 *****
2026-01-28 00:43:21.531217 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:43:21.531230 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:43:21.531243 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:43:21.531257 | orchestrator |
2026-01-28 00:43:21.531270 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:43:21.531286 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:43:21.531301 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:43:21.531338 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:43:21.531352 | orchestrator |
2026-01-28 00:43:21.531365 | orchestrator |
2026-01-28 00:43:21.531379 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:43:21.531391 | orchestrator | Wednesday 28 January 2026 00:43:21 +0000 (0:00:00.686) 0:00:08.569 *****
2026-01-28 00:43:21.531406 | orchestrator | ===============================================================================
2026-01-28 00:43:21.531419 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.32s
2026-01-28 00:43:21.531430 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.67s
2026-01-28 00:43:21.531444 | orchestrator | Check device availability ----------------------------------------------- 1.21s
2026-01-28 00:43:21.531457 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.71s
2026-01-28 00:43:21.531471 | orchestrator | Request device events from the kernel ----------------------------------- 0.69s
2026-01-28 00:43:21.531485 | orchestrator | Reload udev rules ------------------------------------------------------- 0.66s
2026-01-28 00:43:21.531498 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.61s
2026-01-28 00:43:21.531512 | orchestrator | Remove all rook related logical devices --------------------------------- 0.35s
2026-01-28 00:43:21.531525 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s
2026-01-28 00:43:33.591429 | orchestrator | 2026-01-28 00:43:33 | INFO  | Task 9c8fb215-72a2-4cd9-a109-32589bbc20dd (facts) was prepared for execution.
2026-01-28 00:43:33.591541 | orchestrator | 2026-01-28 00:43:33 | INFO  | It takes a moment until task 9c8fb215-72a2-4cd9-a109-32589bbc20dd (facts) has been started and output is visible here.
2026-01-28 00:43:46.471234 | orchestrator |
2026-01-28 00:43:46.471361 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-01-28 00:43:46.471379 | orchestrator |
2026-01-28 00:43:46.471392 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-01-28 00:43:46.471404 | orchestrator | Wednesday 28 January 2026 00:43:37 +0000 (0:00:00.256) 0:00:00.256 *****
2026-01-28 00:43:46.471415 | orchestrator | ok: [testbed-manager]
2026-01-28 00:43:46.471428 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:43:46.471439 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:43:46.471450 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:43:46.471486 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:43:46.471497 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:43:46.471508 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:43:46.471519 | orchestrator |
2026-01-28 00:43:46.471530 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-01-28 00:43:46.471541 | orchestrator | Wednesday 28 January 2026 00:43:38 +0000 (0:00:01.057) 0:00:01.313 *****
2026-01-28 00:43:46.471552 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:43:46.471563 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:43:46.471574 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:43:46.471585 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:43:46.471595 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:43:46.471606 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:43:46.471617 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:43:46.471628 | orchestrator |
2026-01-28 00:43:46.471639 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-01-28 00:43:46.471649 | orchestrator |
2026-01-28 00:43:46.471680 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-01-28 00:43:46.471691 | orchestrator | Wednesday 28 January 2026 00:43:39 +0000 (0:00:01.325) 0:00:02.639 *****
2026-01-28 00:43:46.471702 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:43:46.471713 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:43:46.471724 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:43:46.471738 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:43:46.471749 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:43:46.471762 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:43:46.471775 | orchestrator | ok: [testbed-manager]
2026-01-28 00:43:46.471787 | orchestrator |
2026-01-28 00:43:46.471800 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-01-28 00:43:46.471812 | orchestrator |
2026-01-28 00:43:46.471825 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-01-28 00:43:46.471838 | orchestrator | Wednesday 28 January 2026 00:43:45 +0000 (0:00:05.469) 0:00:08.108 *****
2026-01-28 00:43:46.471850 | orchestrator | skipping: [testbed-manager]
2026-01-28 00:43:46.471862 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:43:46.471874 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:43:46.471886 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:43:46.471898 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:43:46.471911 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:43:46.471923 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:43:46.471935 | orchestrator |
2026-01-28 00:43:46.471947 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:43:46.471959 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:43:46.471974 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:43:46.471999 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:43:46.472010 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:43:46.472021 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:43:46.472101 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:43:46.472114 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 00:43:46.472125 | orchestrator |
2026-01-28 00:43:46.472156 | orchestrator |
2026-01-28 00:43:46.472167 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:43:46.472188 | orchestrator | Wednesday 28 January 2026 00:43:46 +0000 (0:00:00.564) 0:00:08.673 *****
2026-01-28 00:43:46.472199 | orchestrator | ===============================================================================
2026-01-28 00:43:46.472210 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.47s
2026-01-28 00:43:46.472220 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s
2026-01-28 00:43:46.472231 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.06s
2026-01-28 00:43:46.472242 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s
2026-01-28 00:43:49.022438 | orchestrator | 2026-01-28 00:43:49 | INFO  | Task 9cfbb80e-3b25-4bf2-a4b6-fae92c8f5bea (ceph-configure-lvm-volumes) was prepared for execution.
2026-01-28 00:43:49.022542 | orchestrator | 2026-01-28 00:43:49 | INFO  | It takes a moment until task 9cfbb80e-3b25-4bf2-a4b6-fae92c8f5bea (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-01-28 00:44:01.717345 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-01-28 00:44:01.717454 | orchestrator | 2.16.14
2026-01-28 00:44:01.717471 | orchestrator |
2026-01-28 00:44:01.717484 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-28 00:44:01.717496 | orchestrator |
2026-01-28 00:44:01.717508 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-28 00:44:01.717520 | orchestrator | Wednesday 28 January 2026 00:43:53 +0000 (0:00:00.349) 0:00:00.349 *****
2026-01-28 00:44:01.717531 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-01-28 00:44:01.717542 | orchestrator |
2026-01-28 00:44:01.717554 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-28 00:44:01.717565 | orchestrator | Wednesday 28 January 2026 00:43:53 +0000 (0:00:00.255) 0:00:00.605 *****
2026-01-28 00:44:01.717576 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:44:01.717587 | orchestrator |
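The wipe-partitions play earlier boils down to a handful of per-device commands. A sketch of the same sequence, demonstrated against a scratch image file because pointing these at a real `/dev/sdX` is destructive (`wipefs` from util-linux and GNU `dd`/`truncate` assumed available):

```shell
# Per-device equivalent of the wipe-partitions play, on a scratch image file.
disk=$(mktemp)
truncate -s 64M "$disk"                 # stand-in for /dev/sdb

wipefs --all "$disk"                    # TASK [Wipe partitions with wipefs]
dd if=/dev/zero of="$disk" bs=1M count=32 \
   conv=notrunc status=none             # TASK [Overwrite first 32M with zeros]

# On a real host the play then refreshes the kernel's view of the devices:
#   udevadm control --reload-rules      # TASK [Reload udev rules]
#   udevadm trigger                     # TASK [Request device events from the kernel]
rm -f "$disk"
```

Zeroing the first 32M after `wipefs` also clears LVM/Ceph metadata that lives past the signature offsets, which is why both steps appear in the play.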
2026-01-28 00:44:01.717598 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.717609 | orchestrator | Wednesday 28 January 2026 00:43:53 +0000 (0:00:00.234) 0:00:00.839 *****
2026-01-28 00:44:01.717620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-01-28 00:44:01.717642 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-01-28 00:44:01.717654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-01-28 00:44:01.717665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-01-28 00:44:01.717676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-01-28 00:44:01.717687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-01-28 00:44:01.717697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-01-28 00:44:01.717708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-01-28 00:44:01.717719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-01-28 00:44:01.717730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-01-28 00:44:01.717741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-01-28 00:44:01.717751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-01-28 00:44:01.717762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-01-28 00:44:01.717773 | orchestrator |
2026-01-28 00:44:01.717784 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.717795 | orchestrator | Wednesday 28 January 2026 00:43:54 +0000 (0:00:00.598) 0:00:01.437 *****
2026-01-28 00:44:01.717827 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.717839 | orchestrator |
2026-01-28 00:44:01.717850 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.717863 | orchestrator | Wednesday 28 January 2026 00:43:54 +0000 (0:00:00.207) 0:00:01.645 *****
2026-01-28 00:44:01.717876 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.717889 | orchestrator |
2026-01-28 00:44:01.717901 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.717915 | orchestrator | Wednesday 28 January 2026 00:43:54 +0000 (0:00:00.210) 0:00:01.855 *****
2026-01-28 00:44:01.717928 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.717940 | orchestrator |
2026-01-28 00:44:01.717951 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.717961 | orchestrator | Wednesday 28 January 2026 00:43:55 +0000 (0:00:00.201) 0:00:02.057 *****
2026-01-28 00:44:01.717976 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.717987 | orchestrator |
2026-01-28 00:44:01.717998 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.718009 | orchestrator | Wednesday 28 January 2026 00:43:55 +0000 (0:00:00.196) 0:00:02.254 *****
2026-01-28 00:44:01.718162 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.718178 | orchestrator |
2026-01-28 00:44:01.718189 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.718200 | orchestrator | Wednesday 28 January 2026 00:43:55 +0000 (0:00:00.224) 0:00:02.479 *****
2026-01-28 00:44:01.718211 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.718222 | orchestrator |
2026-01-28 00:44:01.718233 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.718243 | orchestrator | Wednesday 28 January 2026 00:43:55 +0000 (0:00:00.208) 0:00:02.687 *****
2026-01-28 00:44:01.718257 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.718275 | orchestrator |
2026-01-28 00:44:01.718294 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.718307 | orchestrator | Wednesday 28 January 2026 00:43:56 +0000 (0:00:00.221) 0:00:02.909 *****
2026-01-28 00:44:01.718318 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.718328 | orchestrator |
2026-01-28 00:44:01.718339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.718350 | orchestrator | Wednesday 28 January 2026 00:43:56 +0000 (0:00:00.207) 0:00:03.116 *****
2026-01-28 00:44:01.718361 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2)
2026-01-28 00:44:01.718373 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2)
2026-01-28 00:44:01.718383 | orchestrator |
2026-01-28 00:44:01.718394 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.718425 | orchestrator | Wednesday 28 January 2026 00:43:56 +0000 (0:00:00.451) 0:00:03.568 *****
2026-01-28 00:44:01.718437 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250)
2026-01-28 00:44:01.718455 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250)
2026-01-28 00:44:01.718467 | orchestrator |
2026-01-28 00:44:01.718478 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.718488 | orchestrator | Wednesday 28 January 2026 00:43:57 +0000 (0:00:00.796) 0:00:04.365 *****
2026-01-28 00:44:01.718499 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d)
2026-01-28 00:44:01.718510 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d)
2026-01-28 00:44:01.718521 | orchestrator |
2026-01-28 00:44:01.718531 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.718542 | orchestrator | Wednesday 28 January 2026 00:43:58 +0000 (0:00:00.744) 0:00:05.109 *****
2026-01-28 00:44:01.718564 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59)
2026-01-28 00:44:01.718575 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59)
2026-01-28 00:44:01.718586 | orchestrator |
2026-01-28 00:44:01.718597 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:01.718607 | orchestrator | Wednesday 28 January 2026 00:43:59 +0000 (0:00:01.064) 0:00:06.174 *****
2026-01-28 00:44:01.718618 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-28 00:44:01.718629 | orchestrator |
2026-01-28 00:44:01.718639 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:01.718650 | orchestrator | Wednesday 28 January 2026 00:43:59 +0000 (0:00:00.365) 0:00:06.539 *****
2026-01-28 00:44:01.718661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-01-28 00:44:01.718671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-01-28 00:44:01.718682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-01-28 00:44:01.718692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-01-28 00:44:01.718703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-01-28 00:44:01.718714 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-01-28 00:44:01.718724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-01-28 00:44:01.718735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-01-28 00:44:01.718746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-01-28 00:44:01.718756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-01-28 00:44:01.718767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-01-28 00:44:01.718778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-01-28 00:44:01.718788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-01-28 00:44:01.718799 | orchestrator |
2026-01-28 00:44:01.718810 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:01.718820 | orchestrator | Wednesday 28 January 2026 00:44:00 +0000 (0:00:00.470) 0:00:07.009 *****
2026-01-28 00:44:01.718831 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.718842 | orchestrator |
2026-01-28 00:44:01.718852 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:01.718863 | orchestrator | Wednesday 28 January 2026 00:44:00 +0000 (0:00:00.238) 0:00:07.248 *****
2026-01-28 00:44:01.718874 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.718884 | orchestrator |
2026-01-28 00:44:01.718895 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:01.718905 | orchestrator | Wednesday 28 January 2026 00:44:00 +0000 (0:00:00.231) 0:00:07.479 *****
2026-01-28 00:44:01.718916 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.718927 | orchestrator |
2026-01-28 00:44:01.718937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:01.718948 | orchestrator | Wednesday 28 January 2026 00:44:00 +0000 (0:00:00.212) 0:00:07.692 *****
2026-01-28 00:44:01.718959 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.718969 | orchestrator |
2026-01-28 00:44:01.718980 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:01.718990 | orchestrator | Wednesday 28 January 2026 00:44:01 +0000 (0:00:00.251) 0:00:07.944 *****
2026-01-28 00:44:01.719008 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.719018 | orchestrator |
2026-01-28 00:44:01.719029 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:01.719040 | orchestrator | Wednesday 28 January 2026 00:44:01 +0000 (0:00:00.205) 0:00:08.150 *****
2026-01-28 00:44:01.719050 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.719061 | orchestrator |
2026-01-28 00:44:01.719071 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:01.719082 | orchestrator | Wednesday 28 January 2026 00:44:01 +0000 (0:00:00.213) 0:00:08.363 *****
2026-01-28 00:44:01.719093 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:01.719104 | orchestrator |
2026-01-28 00:44:01.719119 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:10.099125 | orchestrator | Wednesday 28 January 2026 00:44:01 +0000 (0:00:00.213) 0:00:08.577 *****
2026-01-28 00:44:10.099273 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.099290 | orchestrator |
2026-01-28 00:44:10.099302 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:10.099313 | orchestrator | Wednesday 28 January 2026 00:44:01 +0000 (0:00:00.285) 0:00:08.862 *****
2026-01-28 00:44:10.099323 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-01-28 00:44:10.099354 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-01-28 00:44:10.099365 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-01-28 00:44:10.099375 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-01-28 00:44:10.099385 | orchestrator |
2026-01-28 00:44:10.099396 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:10.099405 | orchestrator | Wednesday 28 January 2026 00:44:03 +0000 (0:00:01.248) 0:00:10.111 *****
2026-01-28 00:44:10.099415 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.099425 | orchestrator |
2026-01-28 00:44:10.099435 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:10.099445 | orchestrator | Wednesday 28 January 2026 00:44:03 +0000 (0:00:00.239) 0:00:10.350 *****
2026-01-28 00:44:10.099454 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.099464 | orchestrator |
2026-01-28 00:44:10.099474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:10.099485 | orchestrator | Wednesday 28 January 2026 00:44:03 +0000 (0:00:00.209) 0:00:10.560 *****
2026-01-28 00:44:10.099496 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.099507 | orchestrator |
2026-01-28 00:44:10.099517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:10.099528 | orchestrator | Wednesday 28 January 2026 00:44:03 +0000 (0:00:00.216) 0:00:10.776 *****
2026-01-28 00:44:10.099539 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.099550 | orchestrator |
2026-01-28 00:44:10.099561 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-28 00:44:10.099572 | orchestrator | Wednesday 28 January 2026 00:44:04 +0000 (0:00:00.229) 0:00:11.005 *****
2026-01-28 00:44:10.099583 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-01-28 00:44:10.099594 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-01-28 00:44:10.099605 | orchestrator |
2026-01-28 00:44:10.099616 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-28 00:44:10.099627 | orchestrator | Wednesday 28 January 2026 00:44:04 +0000 (0:00:00.191) 0:00:11.197 *****
2026-01-28 00:44:10.099638 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.099651 | orchestrator |
2026-01-28 00:44:10.099665 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-28 00:44:10.099677 | orchestrator | Wednesday 28 January 2026 00:44:04 +0000 (0:00:00.166) 0:00:11.364 *****
2026-01-28 00:44:10.099691 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.099703 | orchestrator |
2026-01-28 00:44:10.099716 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-28 00:44:10.099728 | orchestrator | Wednesday 28 January 2026 00:44:04 +0000 (0:00:00.143) 0:00:11.508 *****
2026-01-28 00:44:10.099765 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.099778 | orchestrator |
2026-01-28 00:44:10.099791 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-28 00:44:10.099803 | orchestrator | Wednesday 28 January 2026 00:44:04 +0000 (0:00:00.172) 0:00:11.680 *****
2026-01-28 00:44:10.099816 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:44:10.099829 | orchestrator |
2026-01-28 00:44:10.099842 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-28 00:44:10.099854 | orchestrator | Wednesday 28 January 2026 00:44:04 +0000 (0:00:00.134) 0:00:11.815 *****
2026-01-28 00:44:10.099868 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'}})
2026-01-28 00:44:10.099881 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cf0ea652-88a6-5aa8-929a-ed9131fd0cef'}})
2026-01-28 00:44:10.099893 | orchestrator |
2026-01-28 00:44:10.099905 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-28 00:44:10.099919 | orchestrator | Wednesday 28 January 2026 00:44:05 +0000 (0:00:00.185) 0:00:12.000 *****
2026-01-28 00:44:10.099932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'}})
2026-01-28 00:44:10.099953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cf0ea652-88a6-5aa8-929a-ed9131fd0cef'}})
2026-01-28 00:44:10.099966 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.099978 | orchestrator |
2026-01-28 00:44:10.099991 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-28 00:44:10.100004 | orchestrator | Wednesday 28 January 2026 00:44:05 +0000 (0:00:00.159) 0:00:12.160 *****
2026-01-28 00:44:10.100018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'}})
2026-01-28 00:44:10.100033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cf0ea652-88a6-5aa8-929a-ed9131fd0cef'}})
2026-01-28 00:44:10.100054 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.100072 | orchestrator |
2026-01-28 00:44:10.100091 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-28 00:44:10.100109 | orchestrator | Wednesday 28 January 2026 00:44:05 +0000 (0:00:00.453) 0:00:12.614 *****
2026-01-28 00:44:10.100127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'}})
2026-01-28 00:44:10.100189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cf0ea652-88a6-5aa8-929a-ed9131fd0cef'}})
2026-01-28 00:44:10.100209 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.100229 | orchestrator |
2026-01-28 00:44:10.100248 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-28 00:44:10.100267 | orchestrator | Wednesday 28 January 2026 00:44:05 +0000 (0:00:00.187) 0:00:12.802 *****
2026-01-28 00:44:10.100285 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:44:10.100297 | orchestrator |
2026-01-28 00:44:10.100308 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-28 00:44:10.100318 | orchestrator | Wednesday 28 January 2026 00:44:06 +0000 (0:00:00.143) 0:00:12.945 *****
2026-01-28 00:44:10.100329 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:44:10.100340 | orchestrator |
2026-01-28 00:44:10.100350 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-28 00:44:10.100361 | orchestrator | Wednesday 28 January 2026 00:44:06 +0000 (0:00:00.148) 0:00:13.094 *****
2026-01-28 00:44:10.100372 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:44:10.100382 | orchestrator |
2026-01-28 00:44:10.100393 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-28 00:44:10.100404 | orchestrator | Wednesday 28 January 2026 00:44:06 +0000 (0:00:00.138) 0:00:13.233 ***** 2026-01-28 00:44:10.100426 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:44:10.100437 | orchestrator | 2026-01-28 00:44:10.100448 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-28 00:44:10.100459 | orchestrator | Wednesday 28 January 2026 00:44:06 +0000 (0:00:00.132) 0:00:13.366 ***** 2026-01-28 00:44:10.100469 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:44:10.100480 | orchestrator | 2026-01-28 00:44:10.100491 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-28 00:44:10.100502 | orchestrator | Wednesday 28 January 2026 00:44:06 +0000 (0:00:00.147) 0:00:13.513 ***** 2026-01-28 00:44:10.100512 | orchestrator | ok: [testbed-node-3] => { 2026-01-28 00:44:10.100523 | orchestrator |  "ceph_osd_devices": { 2026-01-28 00:44:10.100534 | orchestrator |  "sdb": { 2026-01-28 00:44:10.100546 | orchestrator |  "osd_lvm_uuid": "12f0ff1a-fab7-5a0a-bd83-09da1ae004fe" 2026-01-28 00:44:10.100557 | orchestrator |  }, 2026-01-28 00:44:10.100568 | orchestrator |  "sdc": { 2026-01-28 00:44:10.100579 | orchestrator |  "osd_lvm_uuid": "cf0ea652-88a6-5aa8-929a-ed9131fd0cef" 2026-01-28 00:44:10.100590 | orchestrator |  } 2026-01-28 00:44:10.100601 | orchestrator |  } 2026-01-28 00:44:10.100612 | orchestrator | } 2026-01-28 00:44:10.100623 | orchestrator | 2026-01-28 00:44:10.100634 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-28 00:44:10.100652 | orchestrator | Wednesday 28 January 2026 00:44:06 +0000 (0:00:00.165) 0:00:13.678 ***** 2026-01-28 00:44:10.100663 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:44:10.100674 | orchestrator | 
2026-01-28 00:44:10.100685 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-28 00:44:10.100695 | orchestrator | Wednesday 28 January 2026 00:44:06 +0000 (0:00:00.141) 0:00:13.820 ***** 2026-01-28 00:44:10.100706 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:44:10.100717 | orchestrator | 2026-01-28 00:44:10.100727 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-01-28 00:44:10.100738 | orchestrator | Wednesday 28 January 2026 00:44:07 +0000 (0:00:00.137) 0:00:13.957 ***** 2026-01-28 00:44:10.100749 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:44:10.100759 | orchestrator | 2026-01-28 00:44:10.100770 | orchestrator | TASK [Print configuration data] ************************************************ 2026-01-28 00:44:10.100781 | orchestrator | Wednesday 28 January 2026 00:44:07 +0000 (0:00:00.135) 0:00:14.092 ***** 2026-01-28 00:44:10.100791 | orchestrator | changed: [testbed-node-3] => { 2026-01-28 00:44:10.100802 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-01-28 00:44:10.100813 | orchestrator |  "ceph_osd_devices": { 2026-01-28 00:44:10.100823 | orchestrator |  "sdb": { 2026-01-28 00:44:10.100834 | orchestrator |  "osd_lvm_uuid": "12f0ff1a-fab7-5a0a-bd83-09da1ae004fe" 2026-01-28 00:44:10.100845 | orchestrator |  }, 2026-01-28 00:44:10.100856 | orchestrator |  "sdc": { 2026-01-28 00:44:10.100867 | orchestrator |  "osd_lvm_uuid": "cf0ea652-88a6-5aa8-929a-ed9131fd0cef" 2026-01-28 00:44:10.100878 | orchestrator |  } 2026-01-28 00:44:10.100889 | orchestrator |  }, 2026-01-28 00:44:10.100900 | orchestrator |  "lvm_volumes": [ 2026-01-28 00:44:10.100910 | orchestrator |  { 2026-01-28 00:44:10.100921 | orchestrator |  "data": "osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe", 2026-01-28 00:44:10.100932 | orchestrator |  "data_vg": "ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe" 2026-01-28 00:44:10.100943 | orchestrator |  }, 
2026-01-28 00:44:10.100953 | orchestrator |  { 2026-01-28 00:44:10.100964 | orchestrator |  "data": "osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef", 2026-01-28 00:44:10.100975 | orchestrator |  "data_vg": "ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef" 2026-01-28 00:44:10.100985 | orchestrator |  } 2026-01-28 00:44:10.100996 | orchestrator |  ] 2026-01-28 00:44:10.101007 | orchestrator |  } 2026-01-28 00:44:10.101018 | orchestrator | } 2026-01-28 00:44:10.101036 | orchestrator | 2026-01-28 00:44:10.101047 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-01-28 00:44:10.101058 | orchestrator | Wednesday 28 January 2026 00:44:07 +0000 (0:00:00.499) 0:00:14.591 ***** 2026-01-28 00:44:10.101069 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-28 00:44:10.101079 | orchestrator | 2026-01-28 00:44:10.101090 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-01-28 00:44:10.101100 | orchestrator | 2026-01-28 00:44:10.101111 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-28 00:44:10.101122 | orchestrator | Wednesday 28 January 2026 00:44:09 +0000 (0:00:01.799) 0:00:16.391 ***** 2026-01-28 00:44:10.101164 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-01-28 00:44:10.101175 | orchestrator | 2026-01-28 00:44:10.101186 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-28 00:44:10.101197 | orchestrator | Wednesday 28 January 2026 00:44:09 +0000 (0:00:00.292) 0:00:16.683 ***** 2026-01-28 00:44:10.101208 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:44:10.101219 | orchestrator | 2026-01-28 00:44:10.101238 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.023612 | orchestrator | Wednesday 28 January 2026 00:44:10 +0000 (0:00:00.279) 
0:00:16.963 ***** 2026-01-28 00:44:19.023726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-01-28 00:44:19.023743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-01-28 00:44:19.023755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-01-28 00:44:19.023766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-01-28 00:44:19.023777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-01-28 00:44:19.023788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-01-28 00:44:19.023799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-01-28 00:44:19.023830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-01-28 00:44:19.023842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-01-28 00:44:19.023853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-01-28 00:44:19.023864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-01-28 00:44:19.023875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-01-28 00:44:19.023891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-01-28 00:44:19.023903 | orchestrator | 2026-01-28 00:44:19.023915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.023926 | orchestrator | Wednesday 28 January 2026 00:44:10 +0000 (0:00:00.440) 0:00:17.404 ***** 2026-01-28 00:44:19.023937 | orchestrator | skipping: 
[testbed-node-4] 2026-01-28 00:44:19.023949 | orchestrator | 2026-01-28 00:44:19.023960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.023971 | orchestrator | Wednesday 28 January 2026 00:44:10 +0000 (0:00:00.200) 0:00:17.604 ***** 2026-01-28 00:44:19.023982 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.023993 | orchestrator | 2026-01-28 00:44:19.024004 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024015 | orchestrator | Wednesday 28 January 2026 00:44:10 +0000 (0:00:00.202) 0:00:17.807 ***** 2026-01-28 00:44:19.024026 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.024037 | orchestrator | 2026-01-28 00:44:19.024048 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024059 | orchestrator | Wednesday 28 January 2026 00:44:11 +0000 (0:00:00.222) 0:00:18.029 ***** 2026-01-28 00:44:19.024091 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.024103 | orchestrator | 2026-01-28 00:44:19.024114 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024124 | orchestrator | Wednesday 28 January 2026 00:44:11 +0000 (0:00:00.313) 0:00:18.343 ***** 2026-01-28 00:44:19.024180 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.024194 | orchestrator | 2026-01-28 00:44:19.024207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024220 | orchestrator | Wednesday 28 January 2026 00:44:12 +0000 (0:00:00.776) 0:00:19.119 ***** 2026-01-28 00:44:19.024233 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.024245 | orchestrator | 2026-01-28 00:44:19.024259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024272 | 
orchestrator | Wednesday 28 January 2026 00:44:12 +0000 (0:00:00.219) 0:00:19.339 ***** 2026-01-28 00:44:19.024284 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.024297 | orchestrator | 2026-01-28 00:44:19.024307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024318 | orchestrator | Wednesday 28 January 2026 00:44:12 +0000 (0:00:00.241) 0:00:19.580 ***** 2026-01-28 00:44:19.024329 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.024340 | orchestrator | 2026-01-28 00:44:19.024351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024361 | orchestrator | Wednesday 28 January 2026 00:44:12 +0000 (0:00:00.245) 0:00:19.825 ***** 2026-01-28 00:44:19.024372 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e) 2026-01-28 00:44:19.024384 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e) 2026-01-28 00:44:19.024395 | orchestrator | 2026-01-28 00:44:19.024406 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024417 | orchestrator | Wednesday 28 January 2026 00:44:13 +0000 (0:00:00.447) 0:00:20.273 ***** 2026-01-28 00:44:19.024427 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772) 2026-01-28 00:44:19.024438 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772) 2026-01-28 00:44:19.024449 | orchestrator | 2026-01-28 00:44:19.024460 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024470 | orchestrator | Wednesday 28 January 2026 00:44:13 +0000 (0:00:00.461) 0:00:20.734 ***** 2026-01-28 00:44:19.024481 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f) 2026-01-28 00:44:19.024492 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f) 2026-01-28 00:44:19.024503 | orchestrator | 2026-01-28 00:44:19.024513 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024543 | orchestrator | Wednesday 28 January 2026 00:44:14 +0000 (0:00:00.423) 0:00:21.158 ***** 2026-01-28 00:44:19.024555 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d) 2026-01-28 00:44:19.024566 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d) 2026-01-28 00:44:19.024577 | orchestrator | 2026-01-28 00:44:19.024595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:44:19.024606 | orchestrator | Wednesday 28 January 2026 00:44:14 +0000 (0:00:00.455) 0:00:21.613 ***** 2026-01-28 00:44:19.024617 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-28 00:44:19.024628 | orchestrator | 2026-01-28 00:44:19.024639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:19.024649 | orchestrator | Wednesday 28 January 2026 00:44:15 +0000 (0:00:00.355) 0:00:21.969 ***** 2026-01-28 00:44:19.024660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-01-28 00:44:19.024681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-01-28 00:44:19.024692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-01-28 00:44:19.024702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-01-28 00:44:19.024713 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-01-28 00:44:19.024724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-01-28 00:44:19.024734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-01-28 00:44:19.024745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-01-28 00:44:19.024756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-01-28 00:44:19.024766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-01-28 00:44:19.024777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-01-28 00:44:19.024788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-01-28 00:44:19.024798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-01-28 00:44:19.024809 | orchestrator | 2026-01-28 00:44:19.024820 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:19.024831 | orchestrator | Wednesday 28 January 2026 00:44:15 +0000 (0:00:00.443) 0:00:22.413 ***** 2026-01-28 00:44:19.024841 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.024852 | orchestrator | 2026-01-28 00:44:19.024863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:19.024874 | orchestrator | Wednesday 28 January 2026 00:44:16 +0000 (0:00:00.801) 0:00:23.214 ***** 2026-01-28 00:44:19.024885 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.024896 | orchestrator | 2026-01-28 00:44:19.024906 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-01-28 00:44:19.024917 | orchestrator | Wednesday 28 January 2026 00:44:16 +0000 (0:00:00.232) 0:00:23.446 ***** 2026-01-28 00:44:19.024928 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.024939 | orchestrator | 2026-01-28 00:44:19.024950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:19.024960 | orchestrator | Wednesday 28 January 2026 00:44:16 +0000 (0:00:00.223) 0:00:23.670 ***** 2026-01-28 00:44:19.024971 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.024982 | orchestrator | 2026-01-28 00:44:19.024993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:19.025004 | orchestrator | Wednesday 28 January 2026 00:44:17 +0000 (0:00:00.234) 0:00:23.904 ***** 2026-01-28 00:44:19.025014 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.025025 | orchestrator | 2026-01-28 00:44:19.025036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:19.025047 | orchestrator | Wednesday 28 January 2026 00:44:17 +0000 (0:00:00.195) 0:00:24.100 ***** 2026-01-28 00:44:19.025057 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.025068 | orchestrator | 2026-01-28 00:44:19.025079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:19.025090 | orchestrator | Wednesday 28 January 2026 00:44:17 +0000 (0:00:00.234) 0:00:24.334 ***** 2026-01-28 00:44:19.025100 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:19.025111 | orchestrator | 2026-01-28 00:44:19.025122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:19.025156 | orchestrator | Wednesday 28 January 2026 00:44:17 +0000 (0:00:00.230) 0:00:24.565 ***** 2026-01-28 00:44:19.025168 | orchestrator | skipping: [testbed-node-4] 
2026-01-28 00:44:19.025186 | orchestrator | 2026-01-28 00:44:19.025197 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:19.025207 | orchestrator | Wednesday 28 January 2026 00:44:17 +0000 (0:00:00.234) 0:00:24.800 ***** 2026-01-28 00:44:19.025218 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-01-28 00:44:19.025230 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-01-28 00:44:19.025241 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-01-28 00:44:19.025251 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-01-28 00:44:19.025262 | orchestrator | 2026-01-28 00:44:19.025273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:19.025284 | orchestrator | Wednesday 28 January 2026 00:44:18 +0000 (0:00:00.848) 0:00:25.648 ***** 2026-01-28 00:44:19.025294 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.334100 | orchestrator | 2026-01-28 00:44:26.334280 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:26.334312 | orchestrator | Wednesday 28 January 2026 00:44:19 +0000 (0:00:00.236) 0:00:25.885 ***** 2026-01-28 00:44:26.334326 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.334339 | orchestrator | 2026-01-28 00:44:26.334350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:26.334379 | orchestrator | Wednesday 28 January 2026 00:44:19 +0000 (0:00:00.214) 0:00:26.100 ***** 2026-01-28 00:44:26.334392 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.334403 | orchestrator | 2026-01-28 00:44:26.334414 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:44:26.334425 | orchestrator | Wednesday 28 January 2026 00:44:19 +0000 (0:00:00.229) 0:00:26.330 ***** 2026-01-28 00:44:26.334436 | 
orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.334446 | orchestrator | 2026-01-28 00:44:26.334457 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-01-28 00:44:26.334468 | orchestrator | Wednesday 28 January 2026 00:44:20 +0000 (0:00:00.943) 0:00:27.273 ***** 2026-01-28 00:44:26.334479 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-01-28 00:44:26.334490 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-01-28 00:44:26.334501 | orchestrator | 2026-01-28 00:44:26.334512 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-01-28 00:44:26.334523 | orchestrator | Wednesday 28 January 2026 00:44:20 +0000 (0:00:00.273) 0:00:27.546 ***** 2026-01-28 00:44:26.334533 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.334544 | orchestrator | 2026-01-28 00:44:26.334557 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-01-28 00:44:26.334570 | orchestrator | Wednesday 28 January 2026 00:44:20 +0000 (0:00:00.178) 0:00:27.725 ***** 2026-01-28 00:44:26.334582 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.334594 | orchestrator | 2026-01-28 00:44:26.334606 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-01-28 00:44:26.334620 | orchestrator | Wednesday 28 January 2026 00:44:21 +0000 (0:00:00.178) 0:00:27.904 ***** 2026-01-28 00:44:26.334632 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.334644 | orchestrator | 2026-01-28 00:44:26.334656 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-01-28 00:44:26.334669 | orchestrator | Wednesday 28 January 2026 00:44:21 +0000 (0:00:00.138) 0:00:28.042 ***** 2026-01-28 00:44:26.334681 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:44:26.334692 | 
orchestrator | 2026-01-28 00:44:26.334703 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-01-28 00:44:26.334714 | orchestrator | Wednesday 28 January 2026 00:44:21 +0000 (0:00:00.141) 0:00:28.184 ***** 2026-01-28 00:44:26.334725 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e01643e5-7b60-5b49-bc8a-cfec0728964e'}}) 2026-01-28 00:44:26.334737 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae2f77e7-beca-5176-aee2-b01d14f9def4'}}) 2026-01-28 00:44:26.334773 | orchestrator | 2026-01-28 00:44:26.334784 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-01-28 00:44:26.334795 | orchestrator | Wednesday 28 January 2026 00:44:21 +0000 (0:00:00.178) 0:00:28.362 ***** 2026-01-28 00:44:26.334806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e01643e5-7b60-5b49-bc8a-cfec0728964e'}})  2026-01-28 00:44:26.334819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae2f77e7-beca-5176-aee2-b01d14f9def4'}})  2026-01-28 00:44:26.334830 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.334841 | orchestrator | 2026-01-28 00:44:26.334851 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-01-28 00:44:26.334862 | orchestrator | Wednesday 28 January 2026 00:44:21 +0000 (0:00:00.145) 0:00:28.507 ***** 2026-01-28 00:44:26.334873 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e01643e5-7b60-5b49-bc8a-cfec0728964e'}})  2026-01-28 00:44:26.334884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae2f77e7-beca-5176-aee2-b01d14f9def4'}})  2026-01-28 00:44:26.334898 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.334917 | orchestrator | 2026-01-28 
00:44:26.334935 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-01-28 00:44:26.334953 | orchestrator | Wednesday 28 January 2026 00:44:21 +0000 (0:00:00.164) 0:00:28.672 ***** 2026-01-28 00:44:26.334971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e01643e5-7b60-5b49-bc8a-cfec0728964e'}})  2026-01-28 00:44:26.334991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae2f77e7-beca-5176-aee2-b01d14f9def4'}})  2026-01-28 00:44:26.335011 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.335030 | orchestrator | 2026-01-28 00:44:26.335049 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-01-28 00:44:26.335060 | orchestrator | Wednesday 28 January 2026 00:44:21 +0000 (0:00:00.161) 0:00:28.833 ***** 2026-01-28 00:44:26.335071 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:44:26.335081 | orchestrator | 2026-01-28 00:44:26.335092 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-01-28 00:44:26.335103 | orchestrator | Wednesday 28 January 2026 00:44:22 +0000 (0:00:00.151) 0:00:28.985 ***** 2026-01-28 00:44:26.335113 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:44:26.335124 | orchestrator | 2026-01-28 00:44:26.335164 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-01-28 00:44:26.335176 | orchestrator | Wednesday 28 January 2026 00:44:22 +0000 (0:00:00.147) 0:00:29.132 ***** 2026-01-28 00:44:26.335209 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.335220 | orchestrator | 2026-01-28 00:44:26.335231 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-01-28 00:44:26.335242 | orchestrator | Wednesday 28 January 2026 00:44:22 +0000 (0:00:00.432) 0:00:29.565 ***** 2026-01-28 
00:44:26.335253 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.335264 | orchestrator | 2026-01-28 00:44:26.335282 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-01-28 00:44:26.335301 | orchestrator | Wednesday 28 January 2026 00:44:22 +0000 (0:00:00.147) 0:00:29.713 ***** 2026-01-28 00:44:26.335321 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.335340 | orchestrator | 2026-01-28 00:44:26.335358 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-01-28 00:44:26.335377 | orchestrator | Wednesday 28 January 2026 00:44:22 +0000 (0:00:00.127) 0:00:29.840 ***** 2026-01-28 00:44:26.335396 | orchestrator | ok: [testbed-node-4] => { 2026-01-28 00:44:26.335414 | orchestrator |  "ceph_osd_devices": { 2026-01-28 00:44:26.335435 | orchestrator |  "sdb": { 2026-01-28 00:44:26.335457 | orchestrator |  "osd_lvm_uuid": "e01643e5-7b60-5b49-bc8a-cfec0728964e" 2026-01-28 00:44:26.335478 | orchestrator |  }, 2026-01-28 00:44:26.335513 | orchestrator |  "sdc": { 2026-01-28 00:44:26.335545 | orchestrator |  "osd_lvm_uuid": "ae2f77e7-beca-5176-aee2-b01d14f9def4" 2026-01-28 00:44:26.335567 | orchestrator |  } 2026-01-28 00:44:26.335589 | orchestrator |  } 2026-01-28 00:44:26.335608 | orchestrator | } 2026-01-28 00:44:26.335624 | orchestrator | 2026-01-28 00:44:26.335636 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-01-28 00:44:26.335647 | orchestrator | Wednesday 28 January 2026 00:44:23 +0000 (0:00:00.163) 0:00:30.004 ***** 2026-01-28 00:44:26.335657 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:44:26.335668 | orchestrator | 2026-01-28 00:44:26.335679 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-01-28 00:44:26.335689 | orchestrator | Wednesday 28 January 2026 00:44:23 +0000 (0:00:00.175) 0:00:30.179 ***** 2026-01-28 
00:44:26.335795 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:44:26.335808 | orchestrator |
2026-01-28 00:44:26.335819 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-28 00:44:26.335830 | orchestrator | Wednesday 28 January 2026 00:44:23 +0000 (0:00:00.163) 0:00:30.343 *****
2026-01-28 00:44:26.335841 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:44:26.335852 | orchestrator |
2026-01-28 00:44:26.335863 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-28 00:44:26.335874 | orchestrator | Wednesday 28 January 2026 00:44:23 +0000 (0:00:00.137) 0:00:30.480 *****
2026-01-28 00:44:26.335884 | orchestrator | changed: [testbed-node-4] => {
2026-01-28 00:44:26.335895 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-28 00:44:26.335907 | orchestrator |         "ceph_osd_devices": {
2026-01-28 00:44:26.335918 | orchestrator |             "sdb": {
2026-01-28 00:44:26.335935 | orchestrator |                 "osd_lvm_uuid": "e01643e5-7b60-5b49-bc8a-cfec0728964e"
2026-01-28 00:44:26.335946 | orchestrator |             },
2026-01-28 00:44:26.335957 | orchestrator |             "sdc": {
2026-01-28 00:44:26.335968 | orchestrator |                 "osd_lvm_uuid": "ae2f77e7-beca-5176-aee2-b01d14f9def4"
2026-01-28 00:44:26.335979 | orchestrator |             }
2026-01-28 00:44:26.335990 | orchestrator |         },
2026-01-28 00:44:26.336001 | orchestrator |         "lvm_volumes": [
2026-01-28 00:44:26.336012 | orchestrator |             {
2026-01-28 00:44:26.336022 | orchestrator |                 "data": "osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e",
2026-01-28 00:44:26.336034 | orchestrator |                 "data_vg": "ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e"
2026-01-28 00:44:26.336045 | orchestrator |             },
2026-01-28 00:44:26.336055 | orchestrator |             {
2026-01-28 00:44:26.336066 | orchestrator |                 "data": "osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4",
2026-01-28 00:44:26.336118 | orchestrator |                 "data_vg": "ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4"
2026-01-28 00:44:26.336163 | orchestrator |             }
2026-01-28 00:44:26.336176 | orchestrator |         ]
2026-01-28 00:44:26.336187 | orchestrator |     }
2026-01-28 00:44:26.336198 | orchestrator | }
2026-01-28 00:44:26.336209 | orchestrator |
2026-01-28 00:44:26.336220 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-28 00:44:26.336231 | orchestrator | Wednesday 28 January 2026 00:44:23 +0000 (0:00:00.218) 0:00:30.699 *****
2026-01-28 00:44:26.336241 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-28 00:44:26.336252 | orchestrator |
2026-01-28 00:44:26.336263 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-01-28 00:44:26.336273 | orchestrator |
2026-01-28 00:44:26.336284 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-28 00:44:26.336295 | orchestrator | Wednesday 28 January 2026 00:44:25 +0000 (0:00:01.271) 0:00:31.970 *****
2026-01-28 00:44:26.336313 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-28 00:44:26.336332 | orchestrator |
2026-01-28 00:44:26.336351 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-28 00:44:26.336382 | orchestrator | Wednesday 28 January 2026 00:44:25 +0000 (0:00:00.208) 0:00:32.651 *****
2026-01-28 00:44:26.336400 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:44:26.336417 | orchestrator |
2026-01-28 00:44:26.336434 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:26.336453 | orchestrator | Wednesday 28 January 2026 00:44:25 +0000 (0:00:00.339) 0:00:32.860 *****
2026-01-28 00:44:26.336471 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-01-28 00:44:26.336488 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-01-28 00:44:26.336506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-01-28 00:44:26.336523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-01-28 00:44:26.336542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-01-28 00:44:26.336577 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-01-28 00:44:33.962583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-01-28 00:44:33.962662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-01-28 00:44:33.962673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-01-28 00:44:33.962681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-01-28 00:44:33.962688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-01-28 00:44:33.962696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-01-28 00:44:33.962703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-01-28 00:44:33.962711 | orchestrator |
2026-01-28 00:44:33.962719 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.962727 | orchestrator | Wednesday 28 January 2026 00:44:26 +0000 (0:00:00.339) 0:00:33.199 *****
2026-01-28 00:44:33.962734 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.962742 | orchestrator |
2026-01-28 00:44:33.962749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.962756 | orchestrator | Wednesday 28 January 2026 00:44:26 +0000 (0:00:00.167) 0:00:33.367 *****
2026-01-28 00:44:33.962763 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.962771 | orchestrator |
2026-01-28 00:44:33.962778 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.962785 | orchestrator | Wednesday 28 January 2026 00:44:26 +0000 (0:00:00.193) 0:00:33.561 *****
2026-01-28 00:44:33.962792 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.962799 | orchestrator |
2026-01-28 00:44:33.962807 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.962814 | orchestrator | Wednesday 28 January 2026 00:44:26 +0000 (0:00:00.161) 0:00:33.722 *****
2026-01-28 00:44:33.962821 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.962828 | orchestrator |
2026-01-28 00:44:33.962836 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.962843 | orchestrator | Wednesday 28 January 2026 00:44:27 +0000 (0:00:00.203) 0:00:33.926 *****
2026-01-28 00:44:33.962850 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.962861 | orchestrator |
2026-01-28 00:44:33.962875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.962888 | orchestrator | Wednesday 28 January 2026 00:44:27 +0000 (0:00:00.200) 0:00:34.127 *****
2026-01-28 00:44:33.962901 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.962914 | orchestrator |
2026-01-28 00:44:33.962942 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.962955 | orchestrator | Wednesday 28 January 2026 00:44:27 +0000 (0:00:00.176) 0:00:34.303 *****
2026-01-28 00:44:33.962989 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963004 | orchestrator |
2026-01-28 00:44:33.963017 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.963030 | orchestrator | Wednesday 28 January 2026 00:44:27 +0000 (0:00:00.183) 0:00:34.487 *****
2026-01-28 00:44:33.963042 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963054 | orchestrator |
2026-01-28 00:44:33.963065 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.963077 | orchestrator | Wednesday 28 January 2026 00:44:27 +0000 (0:00:00.240) 0:00:34.727 *****
2026-01-28 00:44:33.963089 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2)
2026-01-28 00:44:33.963103 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2)
2026-01-28 00:44:33.963114 | orchestrator |
2026-01-28 00:44:33.963149 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.963162 | orchestrator | Wednesday 28 January 2026 00:44:28 +0000 (0:00:00.764) 0:00:35.492 *****
2026-01-28 00:44:33.963175 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d)
2026-01-28 00:44:33.963186 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d)
2026-01-28 00:44:33.963199 | orchestrator |
2026-01-28 00:44:33.963208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.963215 | orchestrator | Wednesday 28 January 2026 00:44:29 +0000 (0:00:00.394) 0:00:35.886 *****
2026-01-28 00:44:33.963223 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37)
2026-01-28 00:44:33.963230 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37)
2026-01-28 00:44:33.963237 | orchestrator |
2026-01-28 00:44:33.963244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.963251 | orchestrator | Wednesday 28 January 2026 00:44:29 +0000 (0:00:00.407) 0:00:36.294 *****
2026-01-28 00:44:33.963259 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9)
2026-01-28 00:44:33.963266 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9)
2026-01-28 00:44:33.963273 | orchestrator |
2026-01-28 00:44:33.963280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:44:33.963287 | orchestrator | Wednesday 28 January 2026 00:44:29 +0000 (0:00:00.402) 0:00:36.696 *****
2026-01-28 00:44:33.963295 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-28 00:44:33.963302 | orchestrator |
2026-01-28 00:44:33.963309 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963330 | orchestrator | Wednesday 28 January 2026 00:44:30 +0000 (0:00:00.325) 0:00:37.021 *****
2026-01-28 00:44:33.963338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-28 00:44:33.963345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-28 00:44:33.963352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-28 00:44:33.963359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-28 00:44:33.963367 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-28 00:44:33.963374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-28 00:44:33.963381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-28 00:44:33.963388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-28 00:44:33.963403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-28 00:44:33.963410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-28 00:44:33.963417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-28 00:44:33.963424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-28 00:44:33.963432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-28 00:44:33.963439 | orchestrator |
2026-01-28 00:44:33.963446 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963453 | orchestrator | Wednesday 28 January 2026 00:44:30 +0000 (0:00:00.369) 0:00:37.390 *****
2026-01-28 00:44:33.963460 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963467 | orchestrator |
2026-01-28 00:44:33.963474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963482 | orchestrator | Wednesday 28 January 2026 00:44:30 +0000 (0:00:00.203) 0:00:37.594 *****
2026-01-28 00:44:33.963489 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963496 | orchestrator |
2026-01-28 00:44:33.963503 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963510 | orchestrator | Wednesday 28 January 2026 00:44:30 +0000 (0:00:00.196) 0:00:37.790 *****
2026-01-28 00:44:33.963518 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963530 | orchestrator |
2026-01-28 00:44:33.963542 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963554 | orchestrator | Wednesday 28 January 2026 00:44:31 +0000 (0:00:00.163) 0:00:37.954 *****
2026-01-28 00:44:33.963565 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963577 | orchestrator |
2026-01-28 00:44:33.963589 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963602 | orchestrator | Wednesday 28 January 2026 00:44:31 +0000 (0:00:00.169) 0:00:38.124 *****
2026-01-28 00:44:33.963610 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963617 | orchestrator |
2026-01-28 00:44:33.963624 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963631 | orchestrator | Wednesday 28 January 2026 00:44:31 +0000 (0:00:00.212) 0:00:38.336 *****
2026-01-28 00:44:33.963638 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963645 | orchestrator |
2026-01-28 00:44:33.963652 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963660 | orchestrator | Wednesday 28 January 2026 00:44:31 +0000 (0:00:00.533) 0:00:38.869 *****
2026-01-28 00:44:33.963667 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963674 | orchestrator |
2026-01-28 00:44:33.963681 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963688 | orchestrator | Wednesday 28 January 2026 00:44:32 +0000 (0:00:00.204) 0:00:39.074 *****
2026-01-28 00:44:33.963695 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963702 | orchestrator |
2026-01-28 00:44:33.963709 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963716 | orchestrator | Wednesday 28 January 2026 00:44:32 +0000 (0:00:00.211) 0:00:39.285 *****
2026-01-28 00:44:33.963723 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-28 00:44:33.963730 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-28 00:44:33.963738 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-28 00:44:33.963745 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-28 00:44:33.963752 | orchestrator |
2026-01-28 00:44:33.963759 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963766 | orchestrator | Wednesday 28 January 2026 00:44:33 +0000 (0:00:00.743) 0:00:40.029 *****
2026-01-28 00:44:33.963773 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963780 | orchestrator |
2026-01-28 00:44:33.963792 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963805 | orchestrator | Wednesday 28 January 2026 00:44:33 +0000 (0:00:00.196) 0:00:40.225 *****
2026-01-28 00:44:33.963813 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963820 | orchestrator |
2026-01-28 00:44:33.963827 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963835 | orchestrator | Wednesday 28 January 2026 00:44:33 +0000 (0:00:00.216) 0:00:40.442 *****
2026-01-28 00:44:33.963842 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963849 | orchestrator |
2026-01-28 00:44:33.963856 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:44:33.963863 | orchestrator | Wednesday 28 January 2026 00:44:33 +0000 (0:00:00.186) 0:00:40.628 *****
2026-01-28 00:44:33.963870 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:33.963878 | orchestrator |
2026-01-28 00:44:33.963890 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-01-28 00:44:38.694680 | orchestrator | Wednesday 28 January 2026 00:44:33 +0000 (0:00:00.195) 0:00:40.824 *****
2026-01-28 00:44:38.694771 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-01-28 00:44:38.694785 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-01-28 00:44:38.694796 | orchestrator |
2026-01-28 00:44:38.694806 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-01-28 00:44:38.694816 | orchestrator | Wednesday 28 January 2026 00:44:34 +0000 (0:00:00.159) 0:00:40.984 *****
2026-01-28 00:44:38.694826 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.694836 | orchestrator |
2026-01-28 00:44:38.694846 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-01-28 00:44:38.694856 | orchestrator | Wednesday 28 January 2026 00:44:34 +0000 (0:00:00.141) 0:00:41.126 *****
2026-01-28 00:44:38.694865 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.694875 | orchestrator |
2026-01-28 00:44:38.694885 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-01-28 00:44:38.694894 | orchestrator | Wednesday 28 January 2026 00:44:34 +0000 (0:00:00.150) 0:00:41.276 *****
2026-01-28 00:44:38.694904 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.694914 | orchestrator |
2026-01-28 00:44:38.694924 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-01-28 00:44:38.694934 | orchestrator | Wednesday 28 January 2026 00:44:34 +0000 (0:00:00.458) 0:00:41.735 *****
2026-01-28 00:44:38.694944 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:44:38.694954 | orchestrator |
2026-01-28 00:44:38.694964 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-01-28 00:44:38.694974 | orchestrator | Wednesday 28 January 2026 00:44:35 +0000 (0:00:00.163) 0:00:41.899 *****
2026-01-28 00:44:38.694985 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'}})
2026-01-28 00:44:38.694995 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'}})
2026-01-28 00:44:38.695004 | orchestrator |
2026-01-28 00:44:38.695014 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-01-28 00:44:38.695024 | orchestrator | Wednesday 28 January 2026 00:44:35 +0000 (0:00:00.177) 0:00:42.076 *****
2026-01-28 00:44:38.695034 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'}})
2026-01-28 00:44:38.695058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'}})
2026-01-28 00:44:38.695069 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.695079 | orchestrator |
2026-01-28 00:44:38.695088 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-01-28 00:44:38.695098 | orchestrator | Wednesday 28 January 2026 00:44:35 +0000 (0:00:00.172) 0:00:42.249 *****
2026-01-28 00:44:38.695108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'}})
2026-01-28 00:44:38.695171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'}})
2026-01-28 00:44:38.695183 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.695193 | orchestrator |
2026-01-28 00:44:38.695202 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-01-28 00:44:38.695212 | orchestrator | Wednesday 28 January 2026 00:44:35 +0000 (0:00:00.168) 0:00:42.417 *****
2026-01-28 00:44:38.695221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'}})
2026-01-28 00:44:38.695232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'}})
2026-01-28 00:44:38.695243 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.695254 | orchestrator |
2026-01-28 00:44:38.695265 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-01-28 00:44:38.695276 | orchestrator | Wednesday 28 January 2026 00:44:35 +0000 (0:00:00.161) 0:00:42.578 *****
2026-01-28 00:44:38.695287 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:44:38.695298 | orchestrator |
2026-01-28 00:44:38.695309 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-01-28 00:44:38.695320 | orchestrator | Wednesday 28 January 2026 00:44:35 +0000 (0:00:00.147) 0:00:42.726 *****
2026-01-28 00:44:38.695330 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:44:38.695341 | orchestrator |
2026-01-28 00:44:38.695352 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-01-28 00:44:38.695363 | orchestrator | Wednesday 28 January 2026 00:44:36 +0000 (0:00:00.153) 0:00:42.880 *****
2026-01-28 00:44:38.695374 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.695385 | orchestrator |
2026-01-28 00:44:38.695396 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-01-28 00:44:38.695407 | orchestrator | Wednesday 28 January 2026 00:44:36 +0000 (0:00:00.156) 0:00:43.037 *****
2026-01-28 00:44:38.695417 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.695428 | orchestrator |
2026-01-28 00:44:38.695440 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-01-28 00:44:38.695451 | orchestrator | Wednesday 28 January 2026 00:44:36 +0000 (0:00:00.160) 0:00:43.197 *****
2026-01-28 00:44:38.695462 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.695472 | orchestrator |
2026-01-28 00:44:38.695483 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-01-28 00:44:38.695494 | orchestrator | Wednesday 28 January 2026 00:44:36 +0000 (0:00:00.145) 0:00:43.343 *****
2026-01-28 00:44:38.695506 | orchestrator | ok: [testbed-node-5] => {
2026-01-28 00:44:38.695517 | orchestrator |     "ceph_osd_devices": {
2026-01-28 00:44:38.695528 | orchestrator |         "sdb": {
2026-01-28 00:44:38.695553 | orchestrator |             "osd_lvm_uuid": "60e20e1d-9b2b-5d4f-86ba-deb7f624d16e"
2026-01-28 00:44:38.695565 | orchestrator |         },
2026-01-28 00:44:38.695577 | orchestrator |         "sdc": {
2026-01-28 00:44:38.695587 | orchestrator |             "osd_lvm_uuid": "6a7f1cd8-9d71-5746-99fd-f6abb350b2d6"
2026-01-28 00:44:38.695598 | orchestrator |         }
2026-01-28 00:44:38.695609 | orchestrator |     }
2026-01-28 00:44:38.695620 | orchestrator | }
2026-01-28 00:44:38.695630 | orchestrator |
2026-01-28 00:44:38.695640 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-01-28 00:44:38.695649 | orchestrator | Wednesday 28 January 2026 00:44:36 +0000 (0:00:00.149) 0:00:43.493 *****
2026-01-28 00:44:38.695659 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.695668 | orchestrator |
2026-01-28 00:44:38.695678 | orchestrator | TASK [Print DB devices] ********************************************************
2026-01-28 00:44:38.695687 | orchestrator | Wednesday 28 January 2026 00:44:36 +0000 (0:00:00.367) 0:00:43.860 *****
2026-01-28 00:44:38.695697 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.695749 | orchestrator |
2026-01-28 00:44:38.695759 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-01-28 00:44:38.695769 | orchestrator | Wednesday 28 January 2026 00:44:37 +0000 (0:00:00.149) 0:00:44.010 *****
2026-01-28 00:44:38.695778 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:44:38.695787 | orchestrator |
2026-01-28 00:44:38.695797 | orchestrator | TASK [Print configuration data] ************************************************
2026-01-28 00:44:38.695806 | orchestrator | Wednesday 28 January 2026 00:44:37 +0000 (0:00:00.170) 0:00:44.180 *****
2026-01-28 00:44:38.695816 | orchestrator | changed: [testbed-node-5] => {
2026-01-28 00:44:38.695825 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-01-28 00:44:38.695834 | orchestrator |         "ceph_osd_devices": {
2026-01-28 00:44:38.695844 | orchestrator |             "sdb": {
2026-01-28 00:44:38.695853 | orchestrator |                 "osd_lvm_uuid": "60e20e1d-9b2b-5d4f-86ba-deb7f624d16e"
2026-01-28 00:44:38.695863 | orchestrator |             },
2026-01-28 00:44:38.695873 | orchestrator |             "sdc": {
2026-01-28 00:44:38.695882 | orchestrator |                 "osd_lvm_uuid": "6a7f1cd8-9d71-5746-99fd-f6abb350b2d6"
2026-01-28 00:44:38.695892 | orchestrator |             }
2026-01-28 00:44:38.695901 | orchestrator |         },
2026-01-28 00:44:38.695911 | orchestrator |         "lvm_volumes": [
2026-01-28 00:44:38.695920 | orchestrator |             {
2026-01-28 00:44:38.695930 | orchestrator |                 "data": "osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e",
2026-01-28 00:44:38.695940 | orchestrator |                 "data_vg": "ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e"
2026-01-28 00:44:38.695950 | orchestrator |             },
2026-01-28 00:44:38.695959 | orchestrator |             {
2026-01-28 00:44:38.695969 | orchestrator |                 "data": "osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6",
2026-01-28 00:44:38.695986 | orchestrator |                 "data_vg": "ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6"
2026-01-28 00:44:38.695996 | orchestrator |             }
2026-01-28 00:44:38.696006 | orchestrator |         ]
2026-01-28 00:44:38.696020 | orchestrator |     }
2026-01-28 00:44:38.696030 | orchestrator | }
2026-01-28 00:44:38.696040 | orchestrator |
2026-01-28 00:44:38.696049 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-01-28 00:44:38.696059 | orchestrator | Wednesday 28 January 2026 00:44:37 +0000 (0:00:00.256) 0:00:44.436 *****
2026-01-28 00:44:38.696068 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-01-28 00:44:38.696078 | orchestrator |
2026-01-28 00:44:38.696088 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:44:38.696097 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-28 00:44:38.696108 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-28 00:44:38.696118 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-01-28 00:44:38.696147 | orchestrator |
2026-01-28 00:44:38.696159 | orchestrator |
2026-01-28 00:44:38.696168 | orchestrator |
2026-01-28 00:44:38.696178 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:44:38.696187 | orchestrator | Wednesday 28 January 2026 00:44:38 +0000 (0:00:01.108) 0:00:45.545 *****
2026-01-28 00:44:38.696197 | orchestrator | ===============================================================================
2026-01-28 00:44:38.696207 | orchestrator | Write configuration file ------------------------------------------------ 4.18s
2026-01-28 00:44:38.696216 | orchestrator | Add known links to the list of available block devices ------------------ 1.38s
2026-01-28 00:44:38.696225 | orchestrator | Add known partitions to the list of available block devices ------------- 1.28s
2026-01-28 00:44:38.696235 | orchestrator | Add known partitions to the list of available block devices ------------- 1.25s
2026-01-28 00:44:38.696257 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.23s
2026-01-28 00:44:38.696267 | orchestrator | Add known links to the list of available block devices ------------------ 1.06s
2026-01-28 00:44:38.696277 | orchestrator | Print configuration data ------------------------------------------------ 0.97s
2026-01-28 00:44:38.696286 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s
2026-01-28 00:44:38.696296 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s
2026-01-28 00:44:38.696305 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2026-01-28 00:44:38.696314 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2026-01-28 00:44:38.696324 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.79s
2026-01-28 00:44:38.696333 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s
2026-01-28 00:44:38.696349 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.77s
2026-01-28 00:44:39.132920 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s
2026-01-28 00:44:39.133007 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2026-01-28 00:44:39.133021 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2026-01-28 00:44:39.133031 | orchestrator | Set DB devices config data ---------------------------------------------- 0.73s
2026-01-28 00:44:39.133041 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s
2026-01-28 00:44:39.133051 | orchestrator | Print WAL devices ------------------------------------------------------- 0.68s
2026-01-28 00:45:01.851571 | orchestrator | 2026-01-28 00:45:01 | INFO  | Task 067460b3-9253-40b6-857a-d6b5e306e3b1 (sync inventory) is running in background. Output coming soon.
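The "Print configuration data" tasks above show the per-node data that the "Write configuration file" handler persists on testbed-manager. Purely as an illustration (the target path and exact file layout are not visible in this log), the data printed for testbed-node-5 corresponds to a YAML fragment like the following:

```yaml
# Hypothetical host vars fragment for testbed-node-5; reconstructed from the
# "Print configuration data" output above, not copied from the written file.
ceph_osd_devices:
  sdb:
    osd_lvm_uuid: 60e20e1d-9b2b-5d4f-86ba-deb7f624d16e
  sdc:
    osd_lvm_uuid: 6a7f1cd8-9d71-5746-99fd-f6abb350b2d6
lvm_volumes:
  - data: osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e
    data_vg: ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e
  - data: osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6
    data_vg: ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6
```

Note how each `lvm_volumes` entry pairs an `osd-block-<uuid>` logical volume name with a `ceph-<uuid>` volume group name, both derived from the device's `osd_lvm_uuid`.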
2026-01-28 00:45:30.646766 | orchestrator | 2026-01-28 00:45:03 | INFO  | Starting group_vars file reorganization
2026-01-28 00:45:30.646875 | orchestrator | 2026-01-28 00:45:03 | INFO  | Moved 0 file(s) to their respective directories
2026-01-28 00:45:30.646892 | orchestrator | 2026-01-28 00:45:03 | INFO  | Group_vars file reorganization completed
2026-01-28 00:45:30.646904 | orchestrator | 2026-01-28 00:45:06 | INFO  | Starting variable preparation from inventory
2026-01-28 00:45:30.646938 | orchestrator | 2026-01-28 00:45:09 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-01-28 00:45:30.646950 | orchestrator | 2026-01-28 00:45:09 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-01-28 00:45:30.646972 | orchestrator | 2026-01-28 00:45:09 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-01-28 00:45:30.646990 | orchestrator | 2026-01-28 00:45:09 | INFO  | 3 file(s) written, 6 host(s) processed
2026-01-28 00:45:30.647017 | orchestrator | 2026-01-28 00:45:09 | INFO  | Variable preparation completed
2026-01-28 00:45:30.647039 | orchestrator | 2026-01-28 00:45:11 | INFO  | Starting inventory overwrite handling
2026-01-28 00:45:30.647058 | orchestrator | 2026-01-28 00:45:11 | INFO  | Handling group overwrites in 99-overwrite
2026-01-28 00:45:30.647076 | orchestrator | 2026-01-28 00:45:11 | INFO  | Removing group frr:children from 60-generic
2026-01-28 00:45:30.647094 | orchestrator | 2026-01-28 00:45:11 | INFO  | Removing group netbird:children from 50-infrastructure
2026-01-28 00:45:30.647111 | orchestrator | 2026-01-28 00:45:11 | INFO  | Removing group ceph-rgw from 50-ceph
2026-01-28 00:45:30.647186 | orchestrator | 2026-01-28 00:45:11 | INFO  | Removing group ceph-mds from 50-ceph
2026-01-28 00:45:30.647207 | orchestrator | 2026-01-28 00:45:11 | INFO  | Handling group overwrites in 20-roles
2026-01-28 00:45:30.647226 | orchestrator | 2026-01-28 00:45:11 | INFO  | Removing group k3s_node from 50-infrastructure
2026-01-28 00:45:30.647280 | orchestrator | 2026-01-28 00:45:11 | INFO  | Removed 5 group(s) in total
2026-01-28 00:45:30.647301 | orchestrator | 2026-01-28 00:45:11 | INFO  | Inventory overwrite handling completed
2026-01-28 00:45:30.647322 | orchestrator | 2026-01-28 00:45:12 | INFO  | Starting merge of inventory files
2026-01-28 00:45:30.647342 | orchestrator | 2026-01-28 00:45:12 | INFO  | Inventory files merged successfully
2026-01-28 00:45:30.647355 | orchestrator | 2026-01-28 00:45:17 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-01-28 00:45:30.647368 | orchestrator | 2026-01-28 00:45:29 | INFO  | Successfully wrote ClusterShell configuration
2026-01-28 00:45:30.647381 | orchestrator | [master 6806241] 2026-01-28-00-45
2026-01-28 00:45:30.647395 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-01-28 00:45:33.072024 | orchestrator | 2026-01-28 00:45:33 | INFO  | Task 31d8c799-90ce-4e95-982f-4ef280064f28 (ceph-create-lvm-devices) was prepared for execution.
2026-01-28 00:45:33.072159 | orchestrator | 2026-01-28 00:45:33 | INFO  | It takes a moment until task 31d8c799-90ce-4e95-982f-4ef280064f28 (ceph-create-lvm-devices) has been started and output is visible here.
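Aside: the `osd_lvm_uuid` values printed by the configure play are version-5 (name-based) UUIDs, which suggests they are derived deterministically rather than generated randomly, so re-running the playbook keeps the same VG/LV names. A minimal sketch of deriving such stable IDs with Python's `uuid` module; the namespace and name format here are illustrative assumptions, not the ones the OSISM playbooks actually use:

```python
import uuid

def osd_lvm_uuid(hostname: str, device: str) -> str:
    # RFC 4122 version-5 UUID: a SHA-1 hash of (namespace, name), so the
    # same hostname/device pair always yields the same identifier.
    # NOTE: NAMESPACE_DNS and the "<host>-<device>" name are assumptions
    # made for this sketch only.
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

u = osd_lvm_uuid("testbed-node-5", "sdb")
# A uuid5 value always carries version 5, matching the IDs seen in the log
# (e.g. 60e20e1d-9b2b-5d4f-... has "5" as the first digit of its third group).
assert uuid.UUID(u).version == 5
```

Because the derivation is a pure function of its inputs, the `data`/`data_vg` names built from these IDs (`osd-block-<uuid>` / `ceph-<uuid>`) stay stable across repeated runs.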
2026-01-28 00:45:46.205980 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-28 00:45:46.206216 | orchestrator | 2.16.14 2026-01-28 00:45:46.206243 | orchestrator | 2026-01-28 00:45:46.206259 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-28 00:45:46.206275 | orchestrator | 2026-01-28 00:45:46.206290 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-28 00:45:46.206306 | orchestrator | Wednesday 28 January 2026 00:45:37 +0000 (0:00:00.314) 0:00:00.314 ***** 2026-01-28 00:45:46.206321 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-01-28 00:45:46.206336 | orchestrator | 2026-01-28 00:45:46.206350 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-28 00:45:46.206365 | orchestrator | Wednesday 28 January 2026 00:45:37 +0000 (0:00:00.271) 0:00:00.586 ***** 2026-01-28 00:45:46.206378 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:45:46.206393 | orchestrator | 2026-01-28 00:45:46.206408 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.206424 | orchestrator | Wednesday 28 January 2026 00:45:38 +0000 (0:00:00.302) 0:00:00.888 ***** 2026-01-28 00:45:46.206438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-01-28 00:45:46.206453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-01-28 00:45:46.206469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-01-28 00:45:46.206484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-01-28 00:45:46.206499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-01-28 
00:45:46.206513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-01-28 00:45:46.206527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-01-28 00:45:46.206542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-01-28 00:45:46.206556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-01-28 00:45:46.206582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-01-28 00:45:46.206597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-01-28 00:45:46.206613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-01-28 00:45:46.206628 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-01-28 00:45:46.206664 | orchestrator | 2026-01-28 00:45:46.206679 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.206694 | orchestrator | Wednesday 28 January 2026 00:45:38 +0000 (0:00:00.628) 0:00:01.517 ***** 2026-01-28 00:45:46.206708 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.206723 | orchestrator | 2026-01-28 00:45:46.206738 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.206752 | orchestrator | Wednesday 28 January 2026 00:45:39 +0000 (0:00:00.230) 0:00:01.748 ***** 2026-01-28 00:45:46.206767 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.206781 | orchestrator | 2026-01-28 00:45:46.206796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.206816 | orchestrator | Wednesday 28 January 2026 00:45:39 +0000 (0:00:00.257) 0:00:02.005 ***** 2026-01-28 
00:45:46.206832 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.206847 | orchestrator | 2026-01-28 00:45:46.206862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.206876 | orchestrator | Wednesday 28 January 2026 00:45:39 +0000 (0:00:00.193) 0:00:02.199 ***** 2026-01-28 00:45:46.206888 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.206904 | orchestrator | 2026-01-28 00:45:46.206918 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.206933 | orchestrator | Wednesday 28 January 2026 00:45:39 +0000 (0:00:00.218) 0:00:02.417 ***** 2026-01-28 00:45:46.206949 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.206964 | orchestrator | 2026-01-28 00:45:46.206979 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.206994 | orchestrator | Wednesday 28 January 2026 00:45:40 +0000 (0:00:00.275) 0:00:02.693 ***** 2026-01-28 00:45:46.207009 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.207024 | orchestrator | 2026-01-28 00:45:46.207039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.207054 | orchestrator | Wednesday 28 January 2026 00:45:40 +0000 (0:00:00.222) 0:00:02.915 ***** 2026-01-28 00:45:46.207070 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.207085 | orchestrator | 2026-01-28 00:45:46.207100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.207115 | orchestrator | Wednesday 28 January 2026 00:45:40 +0000 (0:00:00.236) 0:00:03.151 ***** 2026-01-28 00:45:46.207150 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.207166 | orchestrator | 2026-01-28 00:45:46.207180 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-01-28 00:45:46.207195 | orchestrator | Wednesday 28 January 2026 00:45:40 +0000 (0:00:00.223) 0:00:03.375 ***** 2026-01-28 00:45:46.207210 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2) 2026-01-28 00:45:46.207225 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2) 2026-01-28 00:45:46.207240 | orchestrator | 2026-01-28 00:45:46.207255 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.207291 | orchestrator | Wednesday 28 January 2026 00:45:41 +0000 (0:00:00.536) 0:00:03.911 ***** 2026-01-28 00:45:46.207306 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250) 2026-01-28 00:45:46.207320 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250) 2026-01-28 00:45:46.207335 | orchestrator | 2026-01-28 00:45:46.207348 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.207361 | orchestrator | Wednesday 28 January 2026 00:45:42 +0000 (0:00:00.781) 0:00:04.693 ***** 2026-01-28 00:45:46.207374 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d) 2026-01-28 00:45:46.207388 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d) 2026-01-28 00:45:46.207414 | orchestrator | 2026-01-28 00:45:46.207429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.207443 | orchestrator | Wednesday 28 January 2026 00:45:42 +0000 (0:00:00.778) 0:00:05.471 ***** 2026-01-28 00:45:46.207457 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59) 2026-01-28 00:45:46.207471 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59) 2026-01-28 00:45:46.207485 | orchestrator | 2026-01-28 00:45:46.207499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:45:46.207514 | orchestrator | Wednesday 28 January 2026 00:45:43 +0000 (0:00:01.044) 0:00:06.516 ***** 2026-01-28 00:45:46.207528 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-01-28 00:45:46.207541 | orchestrator | 2026-01-28 00:45:46.207554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:46.207570 | orchestrator | Wednesday 28 January 2026 00:45:44 +0000 (0:00:00.370) 0:00:06.887 ***** 2026-01-28 00:45:46.207587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-01-28 00:45:46.207601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-01-28 00:45:46.207615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-01-28 00:45:46.207629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-01-28 00:45:46.207643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-01-28 00:45:46.207655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-01-28 00:45:46.207668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-01-28 00:45:46.207680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-01-28 00:45:46.207695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-01-28 00:45:46.207708 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-01-28 00:45:46.207722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-01-28 00:45:46.207736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-01-28 00:45:46.207749 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-01-28 00:45:46.207763 | orchestrator | 2026-01-28 00:45:46.207777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:46.207791 | orchestrator | Wednesday 28 January 2026 00:45:44 +0000 (0:00:00.448) 0:00:07.336 ***** 2026-01-28 00:45:46.207805 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.207819 | orchestrator | 2026-01-28 00:45:46.207832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:46.207846 | orchestrator | Wednesday 28 January 2026 00:45:44 +0000 (0:00:00.225) 0:00:07.562 ***** 2026-01-28 00:45:46.207859 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.207873 | orchestrator | 2026-01-28 00:45:46.207888 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:46.207901 | orchestrator | Wednesday 28 January 2026 00:45:45 +0000 (0:00:00.217) 0:00:07.779 ***** 2026-01-28 00:45:46.207915 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.207929 | orchestrator | 2026-01-28 00:45:46.207942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:46.207955 | orchestrator | Wednesday 28 January 2026 00:45:45 +0000 (0:00:00.218) 0:00:07.998 ***** 2026-01-28 00:45:46.207967 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.207988 | orchestrator | 2026-01-28 00:45:46.208003 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-01-28 00:45:46.208016 | orchestrator | Wednesday 28 January 2026 00:45:45 +0000 (0:00:00.222) 0:00:08.220 ***** 2026-01-28 00:45:46.208030 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.208043 | orchestrator | 2026-01-28 00:45:46.208057 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:46.208070 | orchestrator | Wednesday 28 January 2026 00:45:45 +0000 (0:00:00.223) 0:00:08.444 ***** 2026-01-28 00:45:46.208084 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.208098 | orchestrator | 2026-01-28 00:45:46.208112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:46.208189 | orchestrator | Wednesday 28 January 2026 00:45:45 +0000 (0:00:00.215) 0:00:08.660 ***** 2026-01-28 00:45:46.208206 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:46.208219 | orchestrator | 2026-01-28 00:45:46.208242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:55.277287 | orchestrator | Wednesday 28 January 2026 00:45:46 +0000 (0:00:00.223) 0:00:08.884 ***** 2026-01-28 00:45:55.277398 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.277416 | orchestrator | 2026-01-28 00:45:55.277429 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:55.277441 | orchestrator | Wednesday 28 January 2026 00:45:46 +0000 (0:00:00.229) 0:00:09.113 ***** 2026-01-28 00:45:55.277452 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-01-28 00:45:55.277463 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-01-28 00:45:55.277475 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-01-28 00:45:55.277485 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-01-28 00:45:55.277496 | orchestrator | 2026-01-28 
00:45:55.277507 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:55.277518 | orchestrator | Wednesday 28 January 2026 00:45:47 +0000 (0:00:01.206) 0:00:10.320 ***** 2026-01-28 00:45:55.277529 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.277539 | orchestrator | 2026-01-28 00:45:55.277550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:55.277561 | orchestrator | Wednesday 28 January 2026 00:45:47 +0000 (0:00:00.254) 0:00:10.574 ***** 2026-01-28 00:45:55.277572 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.277583 | orchestrator | 2026-01-28 00:45:55.277593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:55.277604 | orchestrator | Wednesday 28 January 2026 00:45:48 +0000 (0:00:00.277) 0:00:10.851 ***** 2026-01-28 00:45:55.277615 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.277627 | orchestrator | 2026-01-28 00:45:55.277637 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-01-28 00:45:55.277648 | orchestrator | Wednesday 28 January 2026 00:45:48 +0000 (0:00:00.232) 0:00:11.084 ***** 2026-01-28 00:45:55.277659 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.277670 | orchestrator | 2026-01-28 00:45:55.277681 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-01-28 00:45:55.277691 | orchestrator | Wednesday 28 January 2026 00:45:48 +0000 (0:00:00.222) 0:00:11.307 ***** 2026-01-28 00:45:55.277702 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.277713 | orchestrator | 2026-01-28 00:45:55.277724 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-01-28 00:45:55.277734 | orchestrator | Wednesday 28 January 2026 00:45:48 +0000 (0:00:00.176) 
0:00:11.484 ***** 2026-01-28 00:45:55.277764 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'}}) 2026-01-28 00:45:55.277776 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cf0ea652-88a6-5aa8-929a-ed9131fd0cef'}}) 2026-01-28 00:45:55.277790 | orchestrator | 2026-01-28 00:45:55.277803 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-01-28 00:45:55.277837 | orchestrator | Wednesday 28 January 2026 00:45:49 +0000 (0:00:00.300) 0:00:11.784 ***** 2026-01-28 00:45:55.277852 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'}) 2026-01-28 00:45:55.277865 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'}) 2026-01-28 00:45:55.277877 | orchestrator | 2026-01-28 00:45:55.277890 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-01-28 00:45:55.277907 | orchestrator | Wednesday 28 January 2026 00:45:51 +0000 (0:00:01.979) 0:00:13.764 ***** 2026-01-28 00:45:55.277920 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})  2026-01-28 00:45:55.277934 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})  2026-01-28 00:45:55.277946 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.277958 | orchestrator | 2026-01-28 00:45:55.277971 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-01-28 00:45:55.277983 | orchestrator | Wednesday 28 January 2026 
00:45:51 +0000 (0:00:00.173) 0:00:13.937 ***** 2026-01-28 00:45:55.277996 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'}) 2026-01-28 00:45:55.278008 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'}) 2026-01-28 00:45:55.278076 | orchestrator | 2026-01-28 00:45:55.278090 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-01-28 00:45:55.278103 | orchestrator | Wednesday 28 January 2026 00:45:52 +0000 (0:00:01.455) 0:00:15.393 ***** 2026-01-28 00:45:55.278117 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})  2026-01-28 00:45:55.278149 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})  2026-01-28 00:45:55.278160 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278171 | orchestrator | 2026-01-28 00:45:55.278182 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-01-28 00:45:55.278193 | orchestrator | Wednesday 28 January 2026 00:45:52 +0000 (0:00:00.204) 0:00:15.598 ***** 2026-01-28 00:45:55.278221 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278233 | orchestrator | 2026-01-28 00:45:55.278244 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-01-28 00:45:55.278255 | orchestrator | Wednesday 28 January 2026 00:45:53 +0000 (0:00:00.162) 0:00:15.760 ***** 2026-01-28 00:45:55.278266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 
'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})  2026-01-28 00:45:55.278277 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})  2026-01-28 00:45:55.278288 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278299 | orchestrator | 2026-01-28 00:45:55.278309 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-01-28 00:45:55.278320 | orchestrator | Wednesday 28 January 2026 00:45:53 +0000 (0:00:00.488) 0:00:16.249 ***** 2026-01-28 00:45:55.278331 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278342 | orchestrator | 2026-01-28 00:45:55.278353 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-01-28 00:45:55.278364 | orchestrator | Wednesday 28 January 2026 00:45:53 +0000 (0:00:00.176) 0:00:16.426 ***** 2026-01-28 00:45:55.278389 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})  2026-01-28 00:45:55.278407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})  2026-01-28 00:45:55.278426 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278445 | orchestrator | 2026-01-28 00:45:55.278462 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-01-28 00:45:55.278479 | orchestrator | Wednesday 28 January 2026 00:45:53 +0000 (0:00:00.196) 0:00:16.622 ***** 2026-01-28 00:45:55.278497 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278514 | orchestrator | 2026-01-28 00:45:55.278532 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-01-28 00:45:55.278550 | orchestrator | 
Wednesday 28 January 2026 00:45:54 +0000 (0:00:00.169) 0:00:16.791 ***** 2026-01-28 00:45:55.278569 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})  2026-01-28 00:45:55.278587 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})  2026-01-28 00:45:55.278607 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278625 | orchestrator | 2026-01-28 00:45:55.278644 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-01-28 00:45:55.278655 | orchestrator | Wednesday 28 January 2026 00:45:54 +0000 (0:00:00.190) 0:00:16.982 ***** 2026-01-28 00:45:55.278666 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:45:55.278677 | orchestrator | 2026-01-28 00:45:55.278688 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-01-28 00:45:55.278699 | orchestrator | Wednesday 28 January 2026 00:45:54 +0000 (0:00:00.184) 0:00:17.166 ***** 2026-01-28 00:45:55.278716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})  2026-01-28 00:45:55.278728 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})  2026-01-28 00:45:55.278738 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278749 | orchestrator | 2026-01-28 00:45:55.278760 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-01-28 00:45:55.278771 | orchestrator | Wednesday 28 January 2026 00:45:54 +0000 (0:00:00.224) 0:00:17.391 ***** 2026-01-28 00:45:55.278781 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})  2026-01-28 00:45:55.278792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})  2026-01-28 00:45:55.278803 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278814 | orchestrator | 2026-01-28 00:45:55.278825 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-01-28 00:45:55.278835 | orchestrator | Wednesday 28 January 2026 00:45:54 +0000 (0:00:00.170) 0:00:17.561 ***** 2026-01-28 00:45:55.278846 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})  2026-01-28 00:45:55.278857 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})  2026-01-28 00:45:55.278868 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278879 | orchestrator | 2026-01-28 00:45:55.278889 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-28 00:45:55.278900 | orchestrator | Wednesday 28 January 2026 00:45:55 +0000 (0:00:00.211) 0:00:17.773 ***** 2026-01-28 00:45:55.278927 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:45:55.278938 | orchestrator | 2026-01-28 00:45:55.278949 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-28 00:45:55.278970 | orchestrator | Wednesday 28 January 2026 00:45:55 +0000 (0:00:00.189) 0:00:17.963 ***** 2026-01-28 00:46:02.291243 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:46:02.291447 | orchestrator | 2026-01-28 00:46:02.291487 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-01-28 00:46:02.291502 | orchestrator | Wednesday 28 January 2026 00:45:55 +0000 (0:00:00.130) 0:00:18.093 ***** 2026-01-28 00:46:02.291514 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:46:02.291526 | orchestrator | 2026-01-28 00:46:02.291537 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-28 00:46:02.291548 | orchestrator | Wednesday 28 January 2026 00:45:55 +0000 (0:00:00.128) 0:00:18.222 ***** 2026-01-28 00:46:02.291559 | orchestrator | ok: [testbed-node-3] => { 2026-01-28 00:46:02.291599 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-28 00:46:02.291615 | orchestrator | } 2026-01-28 00:46:02.291629 | orchestrator | 2026-01-28 00:46:02.291641 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-28 00:46:02.291655 | orchestrator | Wednesday 28 January 2026 00:45:55 +0000 (0:00:00.414) 0:00:18.637 ***** 2026-01-28 00:46:02.291667 | orchestrator | ok: [testbed-node-3] => { 2026-01-28 00:46:02.291701 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-28 00:46:02.291715 | orchestrator | } 2026-01-28 00:46:02.291728 | orchestrator | 2026-01-28 00:46:02.291760 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-28 00:46:02.291774 | orchestrator | Wednesday 28 January 2026 00:45:56 +0000 (0:00:00.156) 0:00:18.793 ***** 2026-01-28 00:46:02.291786 | orchestrator | ok: [testbed-node-3] => { 2026-01-28 00:46:02.291800 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-28 00:46:02.291813 | orchestrator | } 2026-01-28 00:46:02.291826 | orchestrator | 2026-01-28 00:46:02.291838 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-28 00:46:02.291851 | orchestrator | Wednesday 28 January 2026 00:45:56 +0000 (0:00:00.161) 0:00:18.954 ***** 2026-01-28 00:46:02.291864 | orchestrator | ok: 
[testbed-node-3] 2026-01-28 00:46:02.291877 | orchestrator | 2026-01-28 00:46:02.291889 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-28 00:46:02.291902 | orchestrator | Wednesday 28 January 2026 00:45:56 +0000 (0:00:00.694) 0:00:19.649 ***** 2026-01-28 00:46:02.291915 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:46:02.291928 | orchestrator | 2026-01-28 00:46:02.291940 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-28 00:46:02.291951 | orchestrator | Wednesday 28 January 2026 00:45:57 +0000 (0:00:00.557) 0:00:20.207 ***** 2026-01-28 00:46:02.291962 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:46:02.291973 | orchestrator | 2026-01-28 00:46:02.291984 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-01-28 00:46:02.291995 | orchestrator | Wednesday 28 January 2026 00:45:58 +0000 (0:00:00.544) 0:00:20.751 ***** 2026-01-28 00:46:02.292006 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:46:02.292017 | orchestrator | 2026-01-28 00:46:02.292028 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-28 00:46:02.292040 | orchestrator | Wednesday 28 January 2026 00:45:58 +0000 (0:00:00.178) 0:00:20.930 ***** 2026-01-28 00:46:02.292050 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:46:02.292062 | orchestrator | 2026-01-28 00:46:02.292073 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-28 00:46:02.292084 | orchestrator | Wednesday 28 January 2026 00:45:58 +0000 (0:00:00.132) 0:00:21.062 ***** 2026-01-28 00:46:02.292095 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:46:02.292106 | orchestrator | 2026-01-28 00:46:02.292117 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-28 00:46:02.292186 | orchestrator | 
Wednesday 28 January 2026 00:45:58 +0000 (0:00:00.139) 0:00:21.202 ***** 2026-01-28 00:46:02.292227 | orchestrator | ok: [testbed-node-3] => { 2026-01-28 00:46:02.292239 | orchestrator |  "vgs_report": { 2026-01-28 00:46:02.292251 | orchestrator |  "vg": [] 2026-01-28 00:46:02.292262 | orchestrator |  } 2026-01-28 00:46:02.292273 | orchestrator | } 2026-01-28 00:46:02.292284 | orchestrator | 2026-01-28 00:46:02.292295 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-28 00:46:02.292306 | orchestrator | Wednesday 28 January 2026 00:45:58 +0000 (0:00:00.143) 0:00:21.345 ***** 2026-01-28 00:46:02.292317 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:46:02.292327 | orchestrator | 2026-01-28 00:46:02.292355 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-01-28 00:46:02.292367 | orchestrator | Wednesday 28 January 2026 00:45:58 +0000 (0:00:00.155) 0:00:21.501 ***** 2026-01-28 00:46:02.292377 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:46:02.292388 | orchestrator | 2026-01-28 00:46:02.292399 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-28 00:46:02.292410 | orchestrator | Wednesday 28 January 2026 00:45:58 +0000 (0:00:00.162) 0:00:21.664 ***** 2026-01-28 00:46:02.292420 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:46:02.292431 | orchestrator | 2026-01-28 00:46:02.292442 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-28 00:46:02.292453 | orchestrator | Wednesday 28 January 2026 00:45:59 +0000 (0:00:00.434) 0:00:22.098 ***** 2026-01-28 00:46:02.292464 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:46:02.292474 | orchestrator | 2026-01-28 00:46:02.292485 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-28 00:46:02.292496 | orchestrator | 
Wednesday 28 January 2026 00:45:59 +0000 (0:00:00.149) 0:00:22.248 *****
2026-01-28 00:46:02.292507 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.292518 | orchestrator |
2026-01-28 00:46:02.292528 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-28 00:46:02.292539 | orchestrator | Wednesday 28 January 2026 00:45:59 +0000 (0:00:00.153) 0:00:22.402 *****
2026-01-28 00:46:02.292550 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.292561 | orchestrator |
2026-01-28 00:46:02.292572 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-28 00:46:02.292582 | orchestrator | Wednesday 28 January 2026 00:45:59 +0000 (0:00:00.155) 0:00:22.557 *****
2026-01-28 00:46:02.292593 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.292604 | orchestrator |
2026-01-28 00:46:02.292615 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-28 00:46:02.292625 | orchestrator | Wednesday 28 January 2026 00:46:00 +0000 (0:00:00.179) 0:00:22.736 *****
2026-01-28 00:46:02.292654 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.292666 | orchestrator |
2026-01-28 00:46:02.292677 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-28 00:46:02.292688 | orchestrator | Wednesday 28 January 2026 00:46:00 +0000 (0:00:00.149) 0:00:22.885 *****
2026-01-28 00:46:02.292698 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.292709 | orchestrator |
2026-01-28 00:46:02.292720 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-28 00:46:02.292731 | orchestrator | Wednesday 28 January 2026 00:46:00 +0000 (0:00:00.144) 0:00:23.030 *****
2026-01-28 00:46:02.292765 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.292794 | orchestrator |
2026-01-28 00:46:02.292806 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-28 00:46:02.292817 | orchestrator | Wednesday 28 January 2026 00:46:00 +0000 (0:00:00.147) 0:00:23.178 *****
2026-01-28 00:46:02.292849 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.292862 | orchestrator |
2026-01-28 00:46:02.292873 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-28 00:46:02.292884 | orchestrator | Wednesday 28 January 2026 00:46:00 +0000 (0:00:00.149) 0:00:23.327 *****
2026-01-28 00:46:02.292905 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.292916 | orchestrator |
2026-01-28 00:46:02.292927 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-28 00:46:02.292938 | orchestrator | Wednesday 28 January 2026 00:46:00 +0000 (0:00:00.142) 0:00:23.469 *****
2026-01-28 00:46:02.292948 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.292959 | orchestrator |
2026-01-28 00:46:02.292970 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-28 00:46:02.292981 | orchestrator | Wednesday 28 January 2026 00:46:00 +0000 (0:00:00.156) 0:00:23.626 *****
2026-01-28 00:46:02.292992 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.293003 | orchestrator |
2026-01-28 00:46:02.293013 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-28 00:46:02.293024 | orchestrator | Wednesday 28 January 2026 00:46:01 +0000 (0:00:00.143) 0:00:23.770 *****
2026-01-28 00:46:02.293036 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:02.293050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:02.293063 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.293082 | orchestrator |
2026-01-28 00:46:02.293101 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-28 00:46:02.293118 | orchestrator | Wednesday 28 January 2026 00:46:01 +0000 (0:00:00.376) 0:00:24.146 *****
2026-01-28 00:46:02.293159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:02.293180 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:02.293199 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.293219 | orchestrator |
2026-01-28 00:46:02.293231 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-28 00:46:02.293248 | orchestrator | Wednesday 28 January 2026 00:46:01 +0000 (0:00:00.164) 0:00:24.311 *****
2026-01-28 00:46:02.293260 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:02.293271 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:02.293281 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.293292 | orchestrator |
2026-01-28 00:46:02.293303 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-28 00:46:02.293314 | orchestrator | Wednesday 28 January 2026 00:46:01 +0000 (0:00:00.159) 0:00:24.470 *****
2026-01-28 00:46:02.293324 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:02.293335 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:02.293346 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.293383 | orchestrator |
2026-01-28 00:46:02.293395 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-28 00:46:02.293426 | orchestrator | Wednesday 28 January 2026 00:46:01 +0000 (0:00:00.175) 0:00:24.646 *****
2026-01-28 00:46:02.293437 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:02.293448 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:02.293469 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:02.293480 | orchestrator |
2026-01-28 00:46:02.293490 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-28 00:46:02.293501 | orchestrator | Wednesday 28 January 2026 00:46:02 +0000 (0:00:00.166) 0:00:24.813 *****
2026-01-28 00:46:02.293523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:08.035205 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:08.035318 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:08.035334 | orchestrator |
2026-01-28 00:46:08.035346 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-28 00:46:08.035358 | orchestrator | Wednesday 28 January 2026 00:46:02 +0000 (0:00:00.162) 0:00:24.976 *****
2026-01-28 00:46:08.035368 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:08.035379 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:08.035389 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:08.035398 | orchestrator |
2026-01-28 00:46:08.035408 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-28 00:46:08.035418 | orchestrator | Wednesday 28 January 2026 00:46:02 +0000 (0:00:00.170) 0:00:25.146 *****
2026-01-28 00:46:08.035428 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:08.035438 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:08.035448 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:08.035458 | orchestrator |
2026-01-28 00:46:08.035468 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-28 00:46:08.035478 | orchestrator | Wednesday 28 January 2026 00:46:02 +0000 (0:00:00.162) 0:00:25.308 *****
2026-01-28 00:46:08.035487 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:46:08.035498 | orchestrator |
2026-01-28 00:46:08.035507 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-28 00:46:08.035517 | orchestrator | Wednesday 28 January 2026 00:46:03 +0000 (0:00:00.550) 0:00:25.859 *****
2026-01-28 00:46:08.035527 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:46:08.035537 | orchestrator |
2026-01-28 00:46:08.035547 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-28 00:46:08.035556 | orchestrator | Wednesday 28 January 2026 00:46:03 +0000 (0:00:00.558) 0:00:26.417 *****
2026-01-28 00:46:08.035566 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:46:08.035575 | orchestrator |
2026-01-28 00:46:08.035585 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-28 00:46:08.035595 | orchestrator | Wednesday 28 January 2026 00:46:03 +0000 (0:00:00.159) 0:00:26.576 *****
2026-01-28 00:46:08.035604 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'vg_name': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:08.035615 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'vg_name': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:08.035625 | orchestrator |
2026-01-28 00:46:08.035635 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-28 00:46:08.035644 | orchestrator | Wednesday 28 January 2026 00:46:04 +0000 (0:00:00.183) 0:00:26.760 *****
2026-01-28 00:46:08.035654 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:08.035688 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:08.035699 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:08.035708 | orchestrator |
2026-01-28 00:46:08.035718 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-28 00:46:08.035727 | orchestrator | Wednesday 28 January 2026 00:46:04 +0000 (0:00:00.435) 0:00:27.195 *****
2026-01-28 00:46:08.035737 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:08.035746 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:08.035756 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:08.035765 | orchestrator |
2026-01-28 00:46:08.035776 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-28 00:46:08.035785 | orchestrator | Wednesday 28 January 2026 00:46:04 +0000 (0:00:00.162) 0:00:27.358 *****
2026-01-28 00:46:08.035795 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:46:08.035804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:46:08.035814 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:46:08.035823 | orchestrator |
2026-01-28 00:46:08.035833 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-28 00:46:08.035843 | orchestrator | Wednesday 28 January 2026 00:46:04 +0000 (0:00:00.174) 0:00:27.532 *****
2026-01-28 00:46:08.035869 | orchestrator | ok: [testbed-node-3] => {
2026-01-28 00:46:08.035880 | orchestrator |     "lvm_report": {
2026-01-28 00:46:08.035891 | orchestrator |         "lv": [
2026-01-28 00:46:08.035901 | orchestrator |             {
2026-01-28 00:46:08.035911 | orchestrator |                 "lv_name": "osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe",
2026-01-28 00:46:08.035922 | orchestrator |                 "vg_name": "ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe"
2026-01-28 00:46:08.035931 | orchestrator |             },
2026-01-28 00:46:08.035941 | orchestrator |             {
2026-01-28 00:46:08.035951 | orchestrator |                 "lv_name": "osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef",
2026-01-28 00:46:08.035960 | orchestrator |                 "vg_name": "ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef"
2026-01-28 00:46:08.035970 | orchestrator |             }
2026-01-28 00:46:08.035979 | orchestrator |         ],
2026-01-28 00:46:08.035989 | orchestrator |         "pv": [
2026-01-28 00:46:08.035999 | orchestrator |             {
2026-01-28 00:46:08.036008 | orchestrator |                 "pv_name": "/dev/sdb",
2026-01-28 00:46:08.036018 | orchestrator |                 "vg_name": "ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe"
2026-01-28 00:46:08.036027 | orchestrator |             },
2026-01-28 00:46:08.036037 | orchestrator |             {
2026-01-28 00:46:08.036046 | orchestrator |                 "pv_name": "/dev/sdc",
2026-01-28 00:46:08.036071 | orchestrator |                 "vg_name": "ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef"
2026-01-28 00:46:08.036081 | orchestrator |             }
2026-01-28 00:46:08.036091 | orchestrator |         ]
2026-01-28 00:46:08.036100 | orchestrator |     }
2026-01-28 00:46:08.036110 | orchestrator | }
2026-01-28 00:46:08.036120 | orchestrator |
2026-01-28 00:46:08.036151 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-01-28 00:46:08.036161 | orchestrator |
2026-01-28 00:46:08.036171 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-01-28 00:46:08.036180 | orchestrator | Wednesday 28 January 2026 00:46:05 +0000 (0:00:00.292) 0:00:27.824 *****
2026-01-28 00:46:08.036200 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-01-28 00:46:08.036209 | orchestrator |
2026-01-28 00:46:08.036219 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-01-28 00:46:08.036228 | orchestrator | Wednesday 28 January 2026 00:46:05 +0000 (0:00:00.265) 0:00:28.090 *****
2026-01-28 00:46:08.036238 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:46:08.036248 | orchestrator |
2026-01-28 00:46:08.036257 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:08.036267 | orchestrator | Wednesday 28 January 2026 00:46:05 +0000 (0:00:00.256) 0:00:28.347 *****
2026-01-28 00:46:08.036277 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-01-28 00:46:08.036286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-01-28 00:46:08.036296 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-01-28 00:46:08.036306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-01-28 00:46:08.036315 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-01-28 00:46:08.036325 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-01-28 00:46:08.036335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-01-28 00:46:08.036349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-01-28 00:46:08.036359 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-01-28 00:46:08.036368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-01-28 00:46:08.036378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-01-28 00:46:08.036387 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-01-28 00:46:08.036397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-01-28 00:46:08.036407 | orchestrator |
2026-01-28 00:46:08.036416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:08.036426 | orchestrator | Wednesday 28 January 2026 00:46:06 +0000 (0:00:00.488) 0:00:28.835 *****
2026-01-28 00:46:08.036435 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:08.036445 | orchestrator |
2026-01-28 00:46:08.036454 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:08.036464 | orchestrator | Wednesday 28 January 2026 00:46:06 +0000 (0:00:00.248) 0:00:29.083 *****
2026-01-28 00:46:08.036473 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:08.036483 | orchestrator |
2026-01-28 00:46:08.036493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:08.036502 | orchestrator | Wednesday 28 January 2026 00:46:06 +0000 (0:00:00.212) 0:00:29.296 *****
2026-01-28 00:46:08.036512 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:08.036521 | orchestrator |
2026-01-28 00:46:08.036531 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:08.036540 | orchestrator | Wednesday 28 January 2026 00:46:07 +0000 (0:00:00.756) 0:00:30.053 *****
2026-01-28 00:46:08.036550 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:08.036559 | orchestrator |
2026-01-28 00:46:08.036569 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:08.036579 | orchestrator | Wednesday 28 January 2026 00:46:07 +0000 (0:00:00.235) 0:00:30.289 *****
2026-01-28 00:46:08.036588 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:08.036597 | orchestrator |
2026-01-28 00:46:08.036607 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:08.036617 | orchestrator | Wednesday 28 January 2026 00:46:07 +0000 (0:00:00.208) 0:00:30.497 *****
2026-01-28 00:46:08.036632 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:08.036641 | orchestrator |
2026-01-28 00:46:08.036657 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:19.030393 | orchestrator | Wednesday 28 January 2026 00:46:08 +0000 (0:00:00.217) 0:00:30.715 *****
2026-01-28 00:46:19.030513 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.030537 | orchestrator |
2026-01-28 00:46:19.030554 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:19.030572 | orchestrator | Wednesday 28 January 2026 00:46:08 +0000 (0:00:00.196) 0:00:30.912 *****
2026-01-28 00:46:19.030590 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.030606 | orchestrator |
2026-01-28 00:46:19.030624 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:19.030641 | orchestrator | Wednesday 28 January 2026 00:46:08 +0000 (0:00:00.218) 0:00:31.131 *****
2026-01-28 00:46:19.030657 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e)
2026-01-28 00:46:19.030673 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e)
2026-01-28 00:46:19.030683 | orchestrator |
2026-01-28 00:46:19.030693 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:19.030703 | orchestrator | Wednesday 28 January 2026 00:46:08 +0000 (0:00:00.406) 0:00:31.537 *****
2026-01-28 00:46:19.030713 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772)
2026-01-28 00:46:19.030723 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772)
2026-01-28 00:46:19.030732 | orchestrator |
2026-01-28 00:46:19.030745 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:19.030762 | orchestrator | Wednesday 28 January 2026 00:46:09 +0000 (0:00:00.395) 0:00:31.933 *****
2026-01-28 00:46:19.030778 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f)
2026-01-28 00:46:19.030794 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f)
2026-01-28 00:46:19.030810 | orchestrator |
2026-01-28 00:46:19.030826 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:19.030842 | orchestrator | Wednesday 28 January 2026 00:46:09 +0000 (0:00:00.414) 0:00:32.348 *****
2026-01-28 00:46:19.030857 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d)
2026-01-28 00:46:19.030874 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d)
2026-01-28 00:46:19.030891 | orchestrator |
2026-01-28 00:46:19.030909 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:19.030926 | orchestrator | Wednesday 28 January 2026 00:46:10 +0000 (0:00:00.588) 0:00:32.936 *****
2026-01-28 00:46:19.030944 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-28 00:46:19.030962 | orchestrator |
2026-01-28 00:46:19.030980 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.030997 | orchestrator | Wednesday 28 January 2026 00:46:10 +0000 (0:00:00.509) 0:00:33.446 *****
2026-01-28 00:46:19.031034 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-01-28 00:46:19.031048 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-01-28 00:46:19.031059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-01-28 00:46:19.031071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-01-28 00:46:19.031082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-01-28 00:46:19.031093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-01-28 00:46:19.031173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-01-28 00:46:19.031185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-01-28 00:46:19.031195 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-01-28 00:46:19.031204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-01-28 00:46:19.031214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-01-28 00:46:19.031224 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-01-28 00:46:19.031233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-01-28 00:46:19.031243 | orchestrator |
2026-01-28 00:46:19.031252 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031262 | orchestrator | Wednesday 28 January 2026 00:46:11 +0000 (0:00:00.774) 0:00:34.220 *****
2026-01-28 00:46:19.031272 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031281 | orchestrator |
2026-01-28 00:46:19.031291 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031301 | orchestrator | Wednesday 28 January 2026 00:46:11 +0000 (0:00:00.219) 0:00:34.439 *****
2026-01-28 00:46:19.031310 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031320 | orchestrator |
2026-01-28 00:46:19.031330 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031339 | orchestrator | Wednesday 28 January 2026 00:46:11 +0000 (0:00:00.217) 0:00:34.657 *****
2026-01-28 00:46:19.031349 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031358 | orchestrator |
2026-01-28 00:46:19.031386 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031397 | orchestrator | Wednesday 28 January 2026 00:46:12 +0000 (0:00:00.278) 0:00:34.935 *****
2026-01-28 00:46:19.031407 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031416 | orchestrator |
2026-01-28 00:46:19.031426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031435 | orchestrator | Wednesday 28 January 2026 00:46:12 +0000 (0:00:00.244) 0:00:35.180 *****
2026-01-28 00:46:19.031445 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031454 | orchestrator |
2026-01-28 00:46:19.031464 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031473 | orchestrator | Wednesday 28 January 2026 00:46:12 +0000 (0:00:00.194) 0:00:35.374 *****
2026-01-28 00:46:19.031483 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031492 | orchestrator |
2026-01-28 00:46:19.031502 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031511 | orchestrator | Wednesday 28 January 2026 00:46:12 +0000 (0:00:00.191) 0:00:35.565 *****
2026-01-28 00:46:19.031521 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031530 | orchestrator |
2026-01-28 00:46:19.031540 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031549 | orchestrator | Wednesday 28 January 2026 00:46:13 +0000 (0:00:00.204) 0:00:35.769 *****
2026-01-28 00:46:19.031558 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031568 | orchestrator |
2026-01-28 00:46:19.031577 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031587 | orchestrator | Wednesday 28 January 2026 00:46:13 +0000 (0:00:00.198) 0:00:35.969 *****
2026-01-28 00:46:19.031596 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-01-28 00:46:19.031606 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-01-28 00:46:19.031616 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-01-28 00:46:19.031625 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-01-28 00:46:19.031635 | orchestrator |
2026-01-28 00:46:19.031645 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031662 | orchestrator | Wednesday 28 January 2026 00:46:14 +0000 (0:00:00.809) 0:00:36.779 *****
2026-01-28 00:46:19.031671 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031681 | orchestrator |
2026-01-28 00:46:19.031690 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031700 | orchestrator | Wednesday 28 January 2026 00:46:14 +0000 (0:00:00.205) 0:00:36.984 *****
2026-01-28 00:46:19.031709 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031719 | orchestrator |
2026-01-28 00:46:19.031728 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031738 | orchestrator | Wednesday 28 January 2026 00:46:14 +0000 (0:00:00.530) 0:00:37.515 *****
2026-01-28 00:46:19.031747 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031757 | orchestrator |
2026-01-28 00:46:19.031766 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:19.031776 | orchestrator | Wednesday 28 January 2026 00:46:15 +0000 (0:00:00.214) 0:00:37.729 *****
2026-01-28 00:46:19.031785 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031794 | orchestrator |
2026-01-28 00:46:19.031804 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-28 00:46:19.031814 | orchestrator | Wednesday 28 January 2026 00:46:15 +0000 (0:00:00.198) 0:00:37.928 *****
2026-01-28 00:46:19.031823 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.031833 | orchestrator |
2026-01-28 00:46:19.031842 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-28 00:46:19.031852 | orchestrator | Wednesday 28 January 2026 00:46:15 +0000 (0:00:00.138) 0:00:38.067 *****
2026-01-28 00:46:19.031861 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'e01643e5-7b60-5b49-bc8a-cfec0728964e'}})
2026-01-28 00:46:19.031871 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ae2f77e7-beca-5176-aee2-b01d14f9def4'}})
2026-01-28 00:46:19.031881 | orchestrator |
2026-01-28 00:46:19.031891 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-28 00:46:19.031900 | orchestrator | Wednesday 28 January 2026 00:46:15 +0000 (0:00:00.175) 0:00:38.242 *****
2026-01-28 00:46:19.031911 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})
2026-01-28 00:46:19.031922 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})
2026-01-28 00:46:19.031931 | orchestrator |
2026-01-28 00:46:19.031941 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-28 00:46:19.031950 | orchestrator | Wednesday 28 January 2026 00:46:17 +0000 (0:00:01.861) 0:00:40.103 *****
2026-01-28 00:46:19.031960 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})
2026-01-28 00:46:19.031977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})
2026-01-28 00:46:19.031994 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:19.032010 | orchestrator |
2026-01-28 00:46:19.032026 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-28 00:46:19.032042 | orchestrator | Wednesday 28 January 2026 00:46:17 +0000 (0:00:00.162) 0:00:40.266 *****
2026-01-28 00:46:19.032059 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})
2026-01-28 00:46:19.032084 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})
2026-01-28 00:46:24.479564 | orchestrator |
2026-01-28 00:46:24.479693 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-28 00:46:24.479752 | orchestrator | Wednesday 28 January 2026 00:46:19 +0000 (0:00:01.445) 0:00:41.711 *****
2026-01-28 00:46:24.479786 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})
2026-01-28 00:46:24.479801 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})
2026-01-28 00:46:24.479813 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:24.479825 | orchestrator |
2026-01-28 00:46:24.479837 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-28 00:46:24.479847 | orchestrator | Wednesday 28 January 2026 00:46:19 +0000 (0:00:00.150) 0:00:41.861 *****
2026-01-28 00:46:24.479858 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:24.479870 | orchestrator |
2026-01-28 00:46:24.479881 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-28 00:46:24.479892 | orchestrator | Wednesday 28 January 2026 00:46:19 +0000 (0:00:00.137) 0:00:41.999 *****
2026-01-28 00:46:24.479903 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})
2026-01-28 00:46:24.479914 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})
2026-01-28 00:46:24.479925 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:24.479935 | orchestrator |
2026-01-28 00:46:24.479946 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-28 00:46:24.479957 | orchestrator | Wednesday 28 January 2026 00:46:19 +0000 (0:00:00.158) 0:00:42.158 *****
2026-01-28 00:46:24.479968 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:24.479978 | orchestrator |
2026-01-28 00:46:24.479989 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-28 00:46:24.480000 | orchestrator | Wednesday 28 January 2026 00:46:19 +0000 (0:00:00.129) 0:00:42.287 *****
2026-01-28 00:46:24.480011 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})
2026-01-28 00:46:24.480022 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})
2026-01-28 00:46:24.480033 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:24.480044 | orchestrator |
2026-01-28 00:46:24.480055 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-28 00:46:24.480070 | orchestrator | Wednesday 28 January 2026 00:46:19 +0000 (0:00:00.331) 0:00:42.619 *****
2026-01-28 00:46:24.480083 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:24.480095 | orchestrator |
2026-01-28 00:46:24.480107 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-28 00:46:24.480121 | orchestrator | Wednesday 28 January 2026 00:46:20 +0000 (0:00:00.128) 0:00:42.747 *****
2026-01-28 00:46:24.480162 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})
2026-01-28 00:46:24.480176 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})
2026-01-28 00:46:24.480189 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:24.480201 | orchestrator |
2026-01-28 00:46:24.480213 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-28 00:46:24.480225 | orchestrator | Wednesday 28 January 2026 00:46:20 +0000 (0:00:00.148) 0:00:42.896 *****
2026-01-28 00:46:24.480237 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:46:24.480250 | orchestrator |
2026-01-28 00:46:24.480263 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-28 00:46:24.480285 | orchestrator | Wednesday 28 January 2026 00:46:20 +0000 (0:00:00.134) 0:00:43.030 *****
2026-01-28 00:46:24.480298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})
2026-01-28 00:46:24.480309 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})
2026-01-28 00:46:24.480320 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:24.480331 | orchestrator |
2026-01-28 00:46:24.480341 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-28 00:46:24.480352 | orchestrator | Wednesday 28 January 2026 00:46:20 +0000 (0:00:00.151) 0:00:43.182 *****
2026-01-28 00:46:24.480363 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})
2026-01-28 00:46:24.480374 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})
2026-01-28 00:46:24.480384 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:46:24.480395 | orchestrator |
2026-01-28 00:46:24.480406 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-28 00:46:24.480434 | orchestrator | Wednesday 28 January 2026 00:46:20 +0000 (0:00:00.153) 0:00:43.336 *****
2026-01-28 00:46:24.480445 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28
00:46:24.480456 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:24.480467 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:24.480478 | orchestrator | 2026-01-28 00:46:24.480489 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-01-28 00:46:24.480500 | orchestrator | Wednesday 28 January 2026 00:46:20 +0000 (0:00:00.143) 0:00:43.480 ***** 2026-01-28 00:46:24.480510 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:24.480521 | orchestrator | 2026-01-28 00:46:24.480532 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-01-28 00:46:24.480543 | orchestrator | Wednesday 28 January 2026 00:46:20 +0000 (0:00:00.132) 0:00:43.612 ***** 2026-01-28 00:46:24.480553 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:24.480564 | orchestrator | 2026-01-28 00:46:24.480575 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-01-28 00:46:24.480586 | orchestrator | Wednesday 28 January 2026 00:46:21 +0000 (0:00:00.138) 0:00:43.750 ***** 2026-01-28 00:46:24.480596 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:24.480607 | orchestrator | 2026-01-28 00:46:24.480618 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-01-28 00:46:24.480629 | orchestrator | Wednesday 28 January 2026 00:46:21 +0000 (0:00:00.128) 0:00:43.879 ***** 2026-01-28 00:46:24.480639 | orchestrator | ok: [testbed-node-4] => { 2026-01-28 00:46:24.480650 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-01-28 00:46:24.480662 | orchestrator | } 2026-01-28 00:46:24.480673 | orchestrator | 2026-01-28 00:46:24.480684 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-01-28 
00:46:24.480694 | orchestrator | Wednesday 28 January 2026 00:46:21 +0000 (0:00:00.145) 0:00:44.024 ***** 2026-01-28 00:46:24.480705 | orchestrator | ok: [testbed-node-4] => { 2026-01-28 00:46:24.480716 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-01-28 00:46:24.480726 | orchestrator | } 2026-01-28 00:46:24.480737 | orchestrator | 2026-01-28 00:46:24.480748 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-01-28 00:46:24.480759 | orchestrator | Wednesday 28 January 2026 00:46:21 +0000 (0:00:00.143) 0:00:44.167 ***** 2026-01-28 00:46:24.480780 | orchestrator | ok: [testbed-node-4] => { 2026-01-28 00:46:24.480791 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-01-28 00:46:24.480802 | orchestrator | } 2026-01-28 00:46:24.480813 | orchestrator | 2026-01-28 00:46:24.480824 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-01-28 00:46:24.480835 | orchestrator | Wednesday 28 January 2026 00:46:21 +0000 (0:00:00.316) 0:00:44.484 ***** 2026-01-28 00:46:24.480845 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:46:24.480856 | orchestrator | 2026-01-28 00:46:24.480867 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-01-28 00:46:24.480883 | orchestrator | Wednesday 28 January 2026 00:46:22 +0000 (0:00:00.536) 0:00:45.020 ***** 2026-01-28 00:46:24.480894 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:46:24.480905 | orchestrator | 2026-01-28 00:46:24.480916 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-01-28 00:46:24.480926 | orchestrator | Wednesday 28 January 2026 00:46:22 +0000 (0:00:00.504) 0:00:45.525 ***** 2026-01-28 00:46:24.480937 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:46:24.480948 | orchestrator | 2026-01-28 00:46:24.480958 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-01-28 00:46:24.480972 | orchestrator | Wednesday 28 January 2026 00:46:23 +0000 (0:00:00.563) 0:00:46.088 ***** 2026-01-28 00:46:24.480990 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:46:24.481007 | orchestrator | 2026-01-28 00:46:24.481026 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-01-28 00:46:24.481044 | orchestrator | Wednesday 28 January 2026 00:46:23 +0000 (0:00:00.134) 0:00:46.222 ***** 2026-01-28 00:46:24.481061 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:24.481078 | orchestrator | 2026-01-28 00:46:24.481095 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-01-28 00:46:24.481112 | orchestrator | Wednesday 28 January 2026 00:46:23 +0000 (0:00:00.109) 0:00:46.331 ***** 2026-01-28 00:46:24.481155 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:24.481173 | orchestrator | 2026-01-28 00:46:24.481190 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-01-28 00:46:24.481207 | orchestrator | Wednesday 28 January 2026 00:46:23 +0000 (0:00:00.110) 0:00:46.442 ***** 2026-01-28 00:46:24.481224 | orchestrator | ok: [testbed-node-4] => { 2026-01-28 00:46:24.481243 | orchestrator |  "vgs_report": { 2026-01-28 00:46:24.481262 | orchestrator |  "vg": [] 2026-01-28 00:46:24.481279 | orchestrator |  } 2026-01-28 00:46:24.481298 | orchestrator | } 2026-01-28 00:46:24.481316 | orchestrator | 2026-01-28 00:46:24.481334 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-01-28 00:46:24.481352 | orchestrator | Wednesday 28 January 2026 00:46:23 +0000 (0:00:00.139) 0:00:46.582 ***** 2026-01-28 00:46:24.481371 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:24.481389 | orchestrator | 2026-01-28 00:46:24.481407 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-01-28 00:46:24.481425 | orchestrator | Wednesday 28 January 2026 00:46:24 +0000 (0:00:00.164) 0:00:46.746 ***** 2026-01-28 00:46:24.481442 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:24.481459 | orchestrator | 2026-01-28 00:46:24.481478 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-01-28 00:46:24.481495 | orchestrator | Wednesday 28 January 2026 00:46:24 +0000 (0:00:00.126) 0:00:46.873 ***** 2026-01-28 00:46:24.481513 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:24.481531 | orchestrator | 2026-01-28 00:46:24.481548 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-01-28 00:46:24.481567 | orchestrator | Wednesday 28 January 2026 00:46:24 +0000 (0:00:00.133) 0:00:47.007 ***** 2026-01-28 00:46:24.481586 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:24.481605 | orchestrator | 2026-01-28 00:46:24.481639 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-01-28 00:46:29.063531 | orchestrator | Wednesday 28 January 2026 00:46:24 +0000 (0:00:00.157) 0:00:47.165 ***** 2026-01-28 00:46:29.063646 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.063657 | orchestrator | 2026-01-28 00:46:29.063665 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-01-28 00:46:29.063672 | orchestrator | Wednesday 28 January 2026 00:46:24 +0000 (0:00:00.316) 0:00:47.483 ***** 2026-01-28 00:46:29.063679 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.063686 | orchestrator | 2026-01-28 00:46:29.063692 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-01-28 00:46:29.063699 | orchestrator | Wednesday 28 January 2026 00:46:24 +0000 (0:00:00.144) 0:00:47.627 ***** 2026-01-28 00:46:29.063706 | orchestrator | skipping: [testbed-node-4] 
2026-01-28 00:46:29.063713 | orchestrator | 2026-01-28 00:46:29.063719 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-01-28 00:46:29.063726 | orchestrator | Wednesday 28 January 2026 00:46:25 +0000 (0:00:00.133) 0:00:47.760 ***** 2026-01-28 00:46:29.063733 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.063740 | orchestrator | 2026-01-28 00:46:29.063746 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-01-28 00:46:29.063753 | orchestrator | Wednesday 28 January 2026 00:46:25 +0000 (0:00:00.135) 0:00:47.896 ***** 2026-01-28 00:46:29.063760 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.063767 | orchestrator | 2026-01-28 00:46:29.063774 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-01-28 00:46:29.063780 | orchestrator | Wednesday 28 January 2026 00:46:25 +0000 (0:00:00.143) 0:00:48.040 ***** 2026-01-28 00:46:29.063787 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.063794 | orchestrator | 2026-01-28 00:46:29.063801 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-01-28 00:46:29.063807 | orchestrator | Wednesday 28 January 2026 00:46:25 +0000 (0:00:00.154) 0:00:48.195 ***** 2026-01-28 00:46:29.063814 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.063820 | orchestrator | 2026-01-28 00:46:29.063827 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-01-28 00:46:29.063833 | orchestrator | Wednesday 28 January 2026 00:46:25 +0000 (0:00:00.130) 0:00:48.325 ***** 2026-01-28 00:46:29.063840 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.063847 | orchestrator | 2026-01-28 00:46:29.063853 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-01-28 00:46:29.063860 | orchestrator | 
Wednesday 28 January 2026 00:46:25 +0000 (0:00:00.128) 0:00:48.453 ***** 2026-01-28 00:46:29.063866 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.063873 | orchestrator | 2026-01-28 00:46:29.063880 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-01-28 00:46:29.063886 | orchestrator | Wednesday 28 January 2026 00:46:25 +0000 (0:00:00.161) 0:00:48.614 ***** 2026-01-28 00:46:29.063893 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.063899 | orchestrator | 2026-01-28 00:46:29.063906 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-01-28 00:46:29.063913 | orchestrator | Wednesday 28 January 2026 00:46:26 +0000 (0:00:00.141) 0:00:48.755 ***** 2026-01-28 00:46:29.063921 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:29.063930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:29.063937 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.063944 | orchestrator | 2026-01-28 00:46:29.063951 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-01-28 00:46:29.063957 | orchestrator | Wednesday 28 January 2026 00:46:26 +0000 (0:00:00.152) 0:00:48.908 ***** 2026-01-28 00:46:29.063964 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:29.063977 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:29.063984 | orchestrator | skipping: 
[testbed-node-4] 2026-01-28 00:46:29.063990 | orchestrator | 2026-01-28 00:46:29.063997 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-01-28 00:46:29.064004 | orchestrator | Wednesday 28 January 2026 00:46:26 +0000 (0:00:00.162) 0:00:49.071 ***** 2026-01-28 00:46:29.064011 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:29.064018 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:29.064025 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.064031 | orchestrator | 2026-01-28 00:46:29.064038 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-01-28 00:46:29.064045 | orchestrator | Wednesday 28 January 2026 00:46:26 +0000 (0:00:00.305) 0:00:49.377 ***** 2026-01-28 00:46:29.064051 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:29.064058 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:29.064065 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.064072 | orchestrator | 2026-01-28 00:46:29.064093 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-01-28 00:46:29.064101 | orchestrator | Wednesday 28 January 2026 00:46:26 +0000 (0:00:00.150) 0:00:49.528 ***** 2026-01-28 00:46:29.064109 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 
'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:29.064116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:29.064124 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.064165 | orchestrator | 2026-01-28 00:46:29.064172 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-01-28 00:46:29.064180 | orchestrator | Wednesday 28 January 2026 00:46:27 +0000 (0:00:00.167) 0:00:49.696 ***** 2026-01-28 00:46:29.064187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:29.064195 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:29.064203 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.064210 | orchestrator | 2026-01-28 00:46:29.064217 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-01-28 00:46:29.064224 | orchestrator | Wednesday 28 January 2026 00:46:27 +0000 (0:00:00.175) 0:00:49.871 ***** 2026-01-28 00:46:29.064278 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:29.064286 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:29.064294 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.064301 | orchestrator | 2026-01-28 00:46:29.064309 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-01-28 
00:46:29.064317 | orchestrator | Wednesday 28 January 2026 00:46:27 +0000 (0:00:00.161) 0:00:50.033 ***** 2026-01-28 00:46:29.064324 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:29.064338 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:29.064348 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.064355 | orchestrator | 2026-01-28 00:46:29.064363 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-01-28 00:46:29.064370 | orchestrator | Wednesday 28 January 2026 00:46:27 +0000 (0:00:00.155) 0:00:50.189 ***** 2026-01-28 00:46:29.064377 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:46:29.064385 | orchestrator | 2026-01-28 00:46:29.064393 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-01-28 00:46:29.064400 | orchestrator | Wednesday 28 January 2026 00:46:27 +0000 (0:00:00.499) 0:00:50.688 ***** 2026-01-28 00:46:29.064407 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:46:29.064415 | orchestrator | 2026-01-28 00:46:29.064422 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-01-28 00:46:29.064429 | orchestrator | Wednesday 28 January 2026 00:46:28 +0000 (0:00:00.459) 0:00:51.148 ***** 2026-01-28 00:46:29.064437 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:46:29.064444 | orchestrator | 2026-01-28 00:46:29.064450 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-01-28 00:46:29.064457 | orchestrator | Wednesday 28 January 2026 00:46:28 +0000 (0:00:00.142) 0:00:51.290 ***** 2026-01-28 00:46:29.064464 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'vg_name': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'}) 2026-01-28 00:46:29.064472 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'vg_name': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'}) 2026-01-28 00:46:29.064478 | orchestrator | 2026-01-28 00:46:29.064485 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-01-28 00:46:29.064492 | orchestrator | Wednesday 28 January 2026 00:46:28 +0000 (0:00:00.158) 0:00:51.449 ***** 2026-01-28 00:46:29.064499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:29.064505 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:29.064512 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:29.064519 | orchestrator | 2026-01-28 00:46:29.064526 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-01-28 00:46:29.064532 | orchestrator | Wednesday 28 January 2026 00:46:28 +0000 (0:00:00.160) 0:00:51.609 ***** 2026-01-28 00:46:29.064539 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:29.064553 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:34.942525 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:34.942612 | orchestrator | 2026-01-28 00:46:34.942623 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-01-28 00:46:34.942632 | 
orchestrator | Wednesday 28 January 2026 00:46:29 +0000 (0:00:00.139) 0:00:51.749 ***** 2026-01-28 00:46:34.942639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})  2026-01-28 00:46:34.942647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})  2026-01-28 00:46:34.942654 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:46:34.942660 | orchestrator | 2026-01-28 00:46:34.942667 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-01-28 00:46:34.942689 | orchestrator | Wednesday 28 January 2026 00:46:29 +0000 (0:00:00.174) 0:00:51.923 ***** 2026-01-28 00:46:34.942697 | orchestrator | ok: [testbed-node-4] => { 2026-01-28 00:46:34.942703 | orchestrator |  "lvm_report": { 2026-01-28 00:46:34.942711 | orchestrator |  "lv": [ 2026-01-28 00:46:34.942717 | orchestrator |  { 2026-01-28 00:46:34.942723 | orchestrator |  "lv_name": "osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4", 2026-01-28 00:46:34.942731 | orchestrator |  "vg_name": "ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4" 2026-01-28 00:46:34.942737 | orchestrator |  }, 2026-01-28 00:46:34.942743 | orchestrator |  { 2026-01-28 00:46:34.942749 | orchestrator |  "lv_name": "osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e", 2026-01-28 00:46:34.942756 | orchestrator |  "vg_name": "ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e" 2026-01-28 00:46:34.942762 | orchestrator |  } 2026-01-28 00:46:34.942768 | orchestrator |  ], 2026-01-28 00:46:34.942774 | orchestrator |  "pv": [ 2026-01-28 00:46:34.942780 | orchestrator |  { 2026-01-28 00:46:34.942786 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-28 00:46:34.942792 | orchestrator |  "vg_name": "ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e" 2026-01-28 00:46:34.942798 | orchestrator |  }, 2026-01-28 
00:46:34.942804 | orchestrator |  { 2026-01-28 00:46:34.942810 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-28 00:46:34.942816 | orchestrator |  "vg_name": "ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4" 2026-01-28 00:46:34.942822 | orchestrator |  } 2026-01-28 00:46:34.942828 | orchestrator |  ] 2026-01-28 00:46:34.942834 | orchestrator |  } 2026-01-28 00:46:34.942841 | orchestrator | } 2026-01-28 00:46:34.942847 | orchestrator | 2026-01-28 00:46:34.942853 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-01-28 00:46:34.942859 | orchestrator | 2026-01-28 00:46:34.942865 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-01-28 00:46:34.942871 | orchestrator | Wednesday 28 January 2026 00:46:29 +0000 (0:00:00.456) 0:00:52.380 ***** 2026-01-28 00:46:34.942889 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-01-28 00:46:34.942896 | orchestrator | 2026-01-28 00:46:34.942902 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-01-28 00:46:34.942909 | orchestrator | Wednesday 28 January 2026 00:46:29 +0000 (0:00:00.251) 0:00:52.631 ***** 2026-01-28 00:46:34.942915 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:46:34.942921 | orchestrator | 2026-01-28 00:46:34.942927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:46:34.942933 | orchestrator | Wednesday 28 January 2026 00:46:30 +0000 (0:00:00.252) 0:00:52.884 ***** 2026-01-28 00:46:34.942939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-01-28 00:46:34.942945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-01-28 00:46:34.942951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-01-28 00:46:34.942957 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-01-28 00:46:34.942964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-01-28 00:46:34.942969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-01-28 00:46:34.942975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-01-28 00:46:34.942981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-01-28 00:46:34.942988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-01-28 00:46:34.942994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-01-28 00:46:34.943005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-01-28 00:46:34.943011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-01-28 00:46:34.943017 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-01-28 00:46:34.943023 | orchestrator | 2026-01-28 00:46:34.943029 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:46:34.943038 | orchestrator | Wednesday 28 January 2026 00:46:30 +0000 (0:00:00.399) 0:00:53.284 ***** 2026-01-28 00:46:34.943044 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:46:34.943050 | orchestrator | 2026-01-28 00:46:34.943056 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-01-28 00:46:34.943062 | orchestrator | Wednesday 28 January 2026 00:46:30 +0000 (0:00:00.214) 0:00:53.498 ***** 2026-01-28 00:46:34.943068 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:46:34.943074 | orchestrator | 2026-01-28 
00:46:34.943080 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943099 | orchestrator | Wednesday 28 January 2026 00:46:31 +0000 (0:00:00.223) 0:00:53.721 *****
2026-01-28 00:46:34.943106 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:34.943113 | orchestrator |
2026-01-28 00:46:34.943120 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943147 | orchestrator | Wednesday 28 January 2026 00:46:31 +0000 (0:00:00.181) 0:00:53.903 *****
2026-01-28 00:46:34.943155 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:34.943162 | orchestrator |
2026-01-28 00:46:34.943169 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943176 | orchestrator | Wednesday 28 January 2026 00:46:31 +0000 (0:00:00.180) 0:00:54.084 *****
2026-01-28 00:46:34.943183 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:34.943190 | orchestrator |
2026-01-28 00:46:34.943197 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943204 | orchestrator | Wednesday 28 January 2026 00:46:31 +0000 (0:00:00.539) 0:00:54.623 *****
2026-01-28 00:46:34.943211 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:34.943218 | orchestrator |
2026-01-28 00:46:34.943225 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943232 | orchestrator | Wednesday 28 January 2026 00:46:32 +0000 (0:00:00.200) 0:00:54.824 *****
2026-01-28 00:46:34.943239 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:34.943246 | orchestrator |
2026-01-28 00:46:34.943253 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943260 | orchestrator | Wednesday 28 January 2026 00:46:32 +0000 (0:00:00.227) 0:00:55.052 *****
2026-01-28 00:46:34.943266 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:34.943273 | orchestrator |
2026-01-28 00:46:34.943280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943287 | orchestrator | Wednesday 28 January 2026 00:46:32 +0000 (0:00:00.197) 0:00:55.249 *****
2026-01-28 00:46:34.943294 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2)
2026-01-28 00:46:34.943303 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2)
2026-01-28 00:46:34.943309 | orchestrator |
2026-01-28 00:46:34.943316 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943323 | orchestrator | Wednesday 28 January 2026 00:46:32 +0000 (0:00:00.394) 0:00:55.644 *****
2026-01-28 00:46:34.943331 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d)
2026-01-28 00:46:34.943338 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d)
2026-01-28 00:46:34.943345 | orchestrator |
2026-01-28 00:46:34.943352 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943368 | orchestrator | Wednesday 28 January 2026 00:46:33 +0000 (0:00:00.433) 0:00:56.077 *****
2026-01-28 00:46:34.943375 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37)
2026-01-28 00:46:34.943382 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37)
2026-01-28 00:46:34.943389 | orchestrator |
2026-01-28 00:46:34.943396 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943403 | orchestrator | Wednesday 28 January 2026 00:46:33 +0000 (0:00:00.390) 0:00:56.468 *****
2026-01-28 00:46:34.943410 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9)
2026-01-28 00:46:34.943417 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9)
2026-01-28 00:46:34.943424 | orchestrator |
2026-01-28 00:46:34.943431 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-01-28 00:46:34.943438 | orchestrator | Wednesday 28 January 2026 00:46:34 +0000 (0:00:00.402) 0:00:56.870 *****
2026-01-28 00:46:34.943445 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-01-28 00:46:34.943452 | orchestrator |
2026-01-28 00:46:34.943458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:34.943464 | orchestrator | Wednesday 28 January 2026 00:46:34 +0000 (0:00:00.345) 0:00:57.215 *****
2026-01-28 00:46:34.943470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-01-28 00:46:34.943476 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-01-28 00:46:34.943482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-01-28 00:46:34.943488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-01-28 00:46:34.943494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-01-28 00:46:34.943500 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-01-28 00:46:34.943506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-01-28 00:46:34.943512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-01-28 00:46:34.943518 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-01-28 00:46:34.943524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-01-28 00:46:34.943530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-01-28 00:46:34.943540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-01-28 00:46:43.701666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-01-28 00:46:43.701785 | orchestrator |
2026-01-28 00:46:43.701811 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.701833 | orchestrator | Wednesday 28 January 2026 00:46:34 +0000 (0:00:00.406) 0:00:57.622 *****
2026-01-28 00:46:43.701855 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.701876 | orchestrator |
2026-01-28 00:46:43.701896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.701913 | orchestrator | Wednesday 28 January 2026 00:46:35 +0000 (0:00:00.191) 0:00:57.813 *****
2026-01-28 00:46:43.701924 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.701936 | orchestrator |
2026-01-28 00:46:43.701953 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.701971 | orchestrator | Wednesday 28 January 2026 00:46:35 +0000 (0:00:00.500) 0:00:58.314 *****
2026-01-28 00:46:43.701990 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702108 | orchestrator |
2026-01-28 00:46:43.702181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.702198 | orchestrator | Wednesday 28 January 2026 00:46:35 +0000 (0:00:00.218) 0:00:58.533 *****
2026-01-28 00:46:43.702210 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702223 | orchestrator |
2026-01-28 00:46:43.702237 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.702256 | orchestrator | Wednesday 28 January 2026 00:46:36 +0000 (0:00:00.201) 0:00:58.734 *****
2026-01-28 00:46:43.702275 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702293 | orchestrator |
2026-01-28 00:46:43.702310 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.702330 | orchestrator | Wednesday 28 January 2026 00:46:36 +0000 (0:00:00.212) 0:00:58.947 *****
2026-01-28 00:46:43.702345 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702356 | orchestrator |
2026-01-28 00:46:43.702367 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.702398 | orchestrator | Wednesday 28 January 2026 00:46:36 +0000 (0:00:00.223) 0:00:59.170 *****
2026-01-28 00:46:43.702409 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702420 | orchestrator |
2026-01-28 00:46:43.702430 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.702441 | orchestrator | Wednesday 28 January 2026 00:46:36 +0000 (0:00:00.194) 0:00:59.365 *****
2026-01-28 00:46:43.702459 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702478 | orchestrator |
2026-01-28 00:46:43.702498 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.702519 | orchestrator | Wednesday 28 January 2026 00:46:36 +0000 (0:00:00.187) 0:00:59.553 *****
2026-01-28 00:46:43.702538 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-01-28 00:46:43.702550 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-01-28 00:46:43.702561 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-01-28 00:46:43.702572 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-01-28 00:46:43.702583 | orchestrator |
2026-01-28 00:46:43.702593 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.702604 | orchestrator | Wednesday 28 January 2026 00:46:37 +0000 (0:00:00.640) 0:01:00.193 *****
2026-01-28 00:46:43.702615 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702625 | orchestrator |
2026-01-28 00:46:43.702636 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.702647 | orchestrator | Wednesday 28 January 2026 00:46:37 +0000 (0:00:00.219) 0:01:00.413 *****
2026-01-28 00:46:43.702658 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702669 | orchestrator |
2026-01-28 00:46:43.702679 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.702690 | orchestrator | Wednesday 28 January 2026 00:46:37 +0000 (0:00:00.207) 0:01:00.620 *****
2026-01-28 00:46:43.702705 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702722 | orchestrator |
2026-01-28 00:46:43.702741 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-01-28 00:46:43.702755 | orchestrator | Wednesday 28 January 2026 00:46:38 +0000 (0:00:00.164) 0:01:00.785 *****
2026-01-28 00:46:43.702765 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702776 | orchestrator |
2026-01-28 00:46:43.702787 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-01-28 00:46:43.702797 | orchestrator | Wednesday 28 January 2026 00:46:38 +0000 (0:00:00.213) 0:01:00.998 *****
2026-01-28 00:46:43.702808 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.702818 | orchestrator |
2026-01-28 00:46:43.702829 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-01-28 00:46:43.702840 | orchestrator | Wednesday 28 January 2026 00:46:38 +0000 (0:00:00.317) 0:01:01.316 *****
2026-01-28 00:46:43.702850 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'}})
2026-01-28 00:46:43.702873 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'}})
2026-01-28 00:46:43.702884 | orchestrator |
2026-01-28 00:46:43.702894 | orchestrator | TASK [Create block VGs] ********************************************************
2026-01-28 00:46:43.702905 | orchestrator | Wednesday 28 January 2026 00:46:38 +0000 (0:00:00.214) 0:01:01.530 *****
2026-01-28 00:46:43.702918 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:43.702949 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:43.702960 | orchestrator |
2026-01-28 00:46:43.702971 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-01-28 00:46:43.703001 | orchestrator | Wednesday 28 January 2026 00:46:40 +0000 (0:00:01.930) 0:01:03.461 *****
2026-01-28 00:46:43.703013 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:43.703025 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:43.703036 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.703047 | orchestrator |
2026-01-28 00:46:43.703057 | orchestrator | TASK [Create block LVs] ********************************************************
2026-01-28 00:46:43.703068 | orchestrator | Wednesday 28 January 2026 00:46:40 +0000 (0:00:00.136) 0:01:03.597 *****
2026-01-28 00:46:43.703079 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:43.703093 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:43.703112 | orchestrator |
2026-01-28 00:46:43.703156 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-01-28 00:46:43.703177 | orchestrator | Wednesday 28 January 2026 00:46:42 +0000 (0:00:01.350) 0:01:04.947 *****
2026-01-28 00:46:43.703196 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:43.703215 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:43.703233 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.703252 | orchestrator |
2026-01-28 00:46:43.703285 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-01-28 00:46:43.703306 | orchestrator | Wednesday 28 January 2026 00:46:42 +0000 (0:00:00.193) 0:01:05.141 *****
2026-01-28 00:46:43.703320 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.703340 | orchestrator |
2026-01-28 00:46:43.703360 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-01-28 00:46:43.703371 | orchestrator | Wednesday 28 January 2026 00:46:42 +0000 (0:00:00.146) 0:01:05.288 *****
2026-01-28 00:46:43.703385 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:43.703411 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:43.703431 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.703449 | orchestrator |
2026-01-28 00:46:43.703467 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-01-28 00:46:43.703478 | orchestrator | Wednesday 28 January 2026 00:46:42 +0000 (0:00:00.143) 0:01:05.431 *****
2026-01-28 00:46:43.703498 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.703509 | orchestrator |
2026-01-28 00:46:43.703519 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-01-28 00:46:43.703530 | orchestrator | Wednesday 28 January 2026 00:46:42 +0000 (0:00:00.128) 0:01:05.559 *****
2026-01-28 00:46:43.703541 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:43.703552 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:43.703563 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.703574 | orchestrator |
2026-01-28 00:46:43.703592 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-01-28 00:46:43.703610 | orchestrator | Wednesday 28 January 2026 00:46:43 +0000 (0:00:00.138) 0:01:05.698 *****
2026-01-28 00:46:43.703629 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.703640 | orchestrator |
2026-01-28 00:46:43.703656 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-01-28 00:46:43.703673 | orchestrator | Wednesday 28 January 2026 00:46:43 +0000 (0:00:00.135) 0:01:05.833 *****
2026-01-28 00:46:43.703684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:43.703695 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:43.703706 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:43.703716 | orchestrator |
2026-01-28 00:46:43.703727 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-01-28 00:46:43.703758 | orchestrator | Wednesday 28 January 2026 00:46:43 +0000 (0:00:00.142) 0:01:05.976 *****
2026-01-28 00:46:43.703777 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:46:43.703798 | orchestrator |
2026-01-28 00:46:43.703817 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-01-28 00:46:43.703832 | orchestrator | Wednesday 28 January 2026 00:46:43 +0000 (0:00:00.265) 0:01:06.241 *****
2026-01-28 00:46:43.703853 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:49.546581 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:49.546680 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.546694 | orchestrator |
2026-01-28 00:46:49.546704 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-01-28 00:46:49.546716 | orchestrator | Wednesday 28 January 2026 00:46:43 +0000 (0:00:00.147) 0:01:06.389 *****
2026-01-28 00:46:49.546726 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:49.546735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:49.546745 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.546754 | orchestrator |
2026-01-28 00:46:49.546763 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-01-28 00:46:49.546772 | orchestrator | Wednesday 28 January 2026 00:46:43 +0000 (0:00:00.170) 0:01:06.560 *****
2026-01-28 00:46:49.546782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:49.546789 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:49.546817 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.546826 | orchestrator |
2026-01-28 00:46:49.546835 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-01-28 00:46:49.546843 | orchestrator | Wednesday 28 January 2026 00:46:44 +0000 (0:00:00.138) 0:01:06.698 *****
2026-01-28 00:46:49.546850 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.546858 | orchestrator |
2026-01-28 00:46:49.546866 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-01-28 00:46:49.546874 | orchestrator | Wednesday 28 January 2026 00:46:44 +0000 (0:00:00.128) 0:01:06.827 *****
2026-01-28 00:46:49.546883 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.546892 | orchestrator |
2026-01-28 00:46:49.546901 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-01-28 00:46:49.546910 | orchestrator | Wednesday 28 January 2026 00:46:44 +0000 (0:00:00.129) 0:01:06.956 *****
2026-01-28 00:46:49.546918 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.546927 | orchestrator |
2026-01-28 00:46:49.546949 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-01-28 00:46:49.546958 | orchestrator | Wednesday 28 January 2026 00:46:44 +0000 (0:00:00.123) 0:01:07.080 *****
2026-01-28 00:46:49.546968 | orchestrator | ok: [testbed-node-5] => {
2026-01-28 00:46:49.546977 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-01-28 00:46:49.546987 | orchestrator | }
2026-01-28 00:46:49.546997 | orchestrator |
2026-01-28 00:46:49.547006 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-01-28 00:46:49.547015 | orchestrator | Wednesday 28 January 2026 00:46:44 +0000 (0:00:00.139) 0:01:07.220 *****
2026-01-28 00:46:49.547024 | orchestrator | ok: [testbed-node-5] => {
2026-01-28 00:46:49.547033 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-01-28 00:46:49.547042 | orchestrator | }
2026-01-28 00:46:49.547052 | orchestrator |
2026-01-28 00:46:49.547061 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-01-28 00:46:49.547070 | orchestrator | Wednesday 28 January 2026 00:46:44 +0000 (0:00:00.128) 0:01:07.348 *****
2026-01-28 00:46:49.547079 | orchestrator | ok: [testbed-node-5] => {
2026-01-28 00:46:49.547088 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-01-28 00:46:49.547097 | orchestrator | }
2026-01-28 00:46:49.547106 | orchestrator |
2026-01-28 00:46:49.547116 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-01-28 00:46:49.547144 | orchestrator | Wednesday 28 January 2026 00:46:44 +0000 (0:00:00.134) 0:01:07.483 *****
2026-01-28 00:46:49.547153 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:46:49.547161 | orchestrator |
2026-01-28 00:46:49.547170 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-01-28 00:46:49.547178 | orchestrator | Wednesday 28 January 2026 00:46:45 +0000 (0:00:00.542) 0:01:08.025 *****
2026-01-28 00:46:49.547188 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:46:49.547197 | orchestrator |
2026-01-28 00:46:49.547208 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-01-28 00:46:49.547218 | orchestrator | Wednesday 28 January 2026 00:46:45 +0000 (0:00:00.550) 0:01:08.576 *****
2026-01-28 00:46:49.547228 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:46:49.547238 | orchestrator |
2026-01-28 00:46:49.547248 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-01-28 00:46:49.547258 | orchestrator | Wednesday 28 January 2026 00:46:46 +0000 (0:00:00.674) 0:01:09.250 *****
2026-01-28 00:46:49.547267 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:46:49.547276 | orchestrator |
2026-01-28 00:46:49.547285 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-01-28 00:46:49.547294 | orchestrator | Wednesday 28 January 2026 00:46:46 +0000 (0:00:00.162) 0:01:09.413 *****
2026-01-28 00:46:49.547303 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547312 | orchestrator |
2026-01-28 00:46:49.547321 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-01-28 00:46:49.547340 | orchestrator | Wednesday 28 January 2026 00:46:46 +0000 (0:00:00.112) 0:01:09.525 *****
2026-01-28 00:46:49.547408 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547420 | orchestrator |
2026-01-28 00:46:49.547430 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-01-28 00:46:49.547439 | orchestrator | Wednesday 28 January 2026 00:46:46 +0000 (0:00:00.123) 0:01:09.649 *****
2026-01-28 00:46:49.547448 | orchestrator | ok: [testbed-node-5] => {
2026-01-28 00:46:49.547457 | orchestrator |     "vgs_report": {
2026-01-28 00:46:49.547467 | orchestrator |         "vg": []
2026-01-28 00:46:49.547494 | orchestrator |     }
2026-01-28 00:46:49.547504 | orchestrator | }
2026-01-28 00:46:49.547513 | orchestrator |
2026-01-28 00:46:49.547522 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-01-28 00:46:49.547532 | orchestrator | Wednesday 28 January 2026 00:46:47 +0000 (0:00:00.142) 0:01:09.791 *****
2026-01-28 00:46:49.547541 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547550 | orchestrator |
2026-01-28 00:46:49.547559 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-01-28 00:46:49.547568 | orchestrator | Wednesday 28 January 2026 00:46:47 +0000 (0:00:00.135) 0:01:09.927 *****
2026-01-28 00:46:49.547577 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547586 | orchestrator |
2026-01-28 00:46:49.547595 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-01-28 00:46:49.547604 | orchestrator | Wednesday 28 January 2026 00:46:47 +0000 (0:00:00.149) 0:01:10.076 *****
2026-01-28 00:46:49.547613 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547622 | orchestrator |
2026-01-28 00:46:49.547631 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-01-28 00:46:49.547640 | orchestrator | Wednesday 28 January 2026 00:46:47 +0000 (0:00:00.125) 0:01:10.201 *****
2026-01-28 00:46:49.547649 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547658 | orchestrator |
2026-01-28 00:46:49.547667 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-01-28 00:46:49.547676 | orchestrator | Wednesday 28 January 2026 00:46:47 +0000 (0:00:00.134) 0:01:10.336 *****
2026-01-28 00:46:49.547684 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547693 | orchestrator |
2026-01-28 00:46:49.547701 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-01-28 00:46:49.547710 | orchestrator | Wednesday 28 January 2026 00:46:47 +0000 (0:00:00.137) 0:01:10.474 *****
2026-01-28 00:46:49.547719 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547728 | orchestrator |
2026-01-28 00:46:49.547736 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-01-28 00:46:49.547745 | orchestrator | Wednesday 28 January 2026 00:46:47 +0000 (0:00:00.126) 0:01:10.600 *****
2026-01-28 00:46:49.547754 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547762 | orchestrator |
2026-01-28 00:46:49.547771 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-01-28 00:46:49.547779 | orchestrator | Wednesday 28 January 2026 00:46:48 +0000 (0:00:00.130) 0:01:10.731 *****
2026-01-28 00:46:49.547788 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547797 | orchestrator |
2026-01-28 00:46:49.547805 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-01-28 00:46:49.547814 | orchestrator | Wednesday 28 January 2026 00:46:48 +0000 (0:00:00.273) 0:01:11.004 *****
2026-01-28 00:46:49.547823 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547831 | orchestrator |
2026-01-28 00:46:49.547846 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-01-28 00:46:49.547855 | orchestrator | Wednesday 28 January 2026 00:46:48 +0000 (0:00:00.115) 0:01:11.119 *****
2026-01-28 00:46:49.547864 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547872 | orchestrator |
2026-01-28 00:46:49.547881 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-01-28 00:46:49.547890 | orchestrator | Wednesday 28 January 2026 00:46:48 +0000 (0:00:00.133) 0:01:11.253 *****
2026-01-28 00:46:49.547905 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547912 | orchestrator |
2026-01-28 00:46:49.547920 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-01-28 00:46:49.547927 | orchestrator | Wednesday 28 January 2026 00:46:48 +0000 (0:00:00.126) 0:01:11.380 *****
2026-01-28 00:46:49.547934 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547942 | orchestrator |
2026-01-28 00:46:49.547950 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-01-28 00:46:49.547958 | orchestrator | Wednesday 28 January 2026 00:46:48 +0000 (0:00:00.144) 0:01:11.524 *****
2026-01-28 00:46:49.547965 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.547973 | orchestrator |
2026-01-28 00:46:49.547982 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-01-28 00:46:49.547991 | orchestrator | Wednesday 28 January 2026 00:46:48 +0000 (0:00:00.160) 0:01:11.684 *****
2026-01-28 00:46:49.547999 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.548007 | orchestrator |
2026-01-28 00:46:49.548016 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-01-28 00:46:49.548025 | orchestrator | Wednesday 28 January 2026 00:46:49 +0000 (0:00:00.129) 0:01:11.814 *****
2026-01-28 00:46:49.548034 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:49.548043 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:49.548052 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.548060 | orchestrator |
2026-01-28 00:46:49.548069 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-01-28 00:46:49.548077 | orchestrator | Wednesday 28 January 2026 00:46:49 +0000 (0:00:00.144) 0:01:11.958 *****
2026-01-28 00:46:49.548085 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:49.548094 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:49.548103 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:49.548111 | orchestrator |
2026-01-28 00:46:49.548120 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-01-28 00:46:49.548144 | orchestrator | Wednesday 28 January 2026 00:46:49 +0000 (0:00:00.142) 0:01:12.100 *****
2026-01-28 00:46:49.548160 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:52.466834 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:52.466955 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:52.466982 | orchestrator |
2026-01-28 00:46:52.467005 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-01-28 00:46:52.467022 | orchestrator | Wednesday 28 January 2026 00:46:49 +0000 (0:00:00.133) 0:01:12.233 *****
2026-01-28 00:46:52.467033 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:52.467045 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:52.467055 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:52.467066 | orchestrator |
2026-01-28 00:46:52.467077 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-01-28 00:46:52.467088 | orchestrator | Wednesday 28 January 2026 00:46:49 +0000 (0:00:00.144) 0:01:12.378 *****
2026-01-28 00:46:52.467202 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:52.467230 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:52.467241 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:52.467252 | orchestrator |
2026-01-28 00:46:52.467263 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-01-28 00:46:52.467283 | orchestrator | Wednesday 28 January 2026 00:46:49 +0000 (0:00:00.142) 0:01:12.521 *****
2026-01-28 00:46:52.467301 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:52.467320 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:52.467340 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:52.467358 | orchestrator |
2026-01-28 00:46:52.467377 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-01-28 00:46:52.467396 | orchestrator | Wednesday 28 January 2026 00:46:50 +0000 (0:00:00.273) 0:01:12.795 *****
2026-01-28 00:46:52.467412 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:52.467425 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:52.467437 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:52.467450 | orchestrator |
2026-01-28 00:46:52.467462 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-01-28 00:46:52.467475 | orchestrator | Wednesday 28 January 2026 00:46:50 +0000 (0:00:00.133) 0:01:12.928 *****
2026-01-28 00:46:52.467487 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:52.467499 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:52.467511 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:52.467523 | orchestrator |
2026-01-28 00:46:52.467535 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-01-28 00:46:52.467547 | orchestrator | Wednesday 28 January 2026 00:46:50 +0000 (0:00:00.143) 0:01:13.071 *****
2026-01-28 00:46:52.467559 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:46:52.467572 | orchestrator |
2026-01-28 00:46:52.467584 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-01-28 00:46:52.467596 | orchestrator | Wednesday 28 January 2026 00:46:50 +0000 (0:00:00.517) 0:01:13.589 *****
2026-01-28 00:46:52.467609 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:46:52.467621 | orchestrator |
2026-01-28 00:46:52.467633 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-01-28 00:46:52.467650 | orchestrator | Wednesday 28 January 2026 00:46:51 +0000 (0:00:00.624) 0:01:14.213 *****
2026-01-28 00:46:52.467668 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:46:52.467686 | orchestrator |
2026-01-28 00:46:52.467705 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-01-28 00:46:52.467724 | orchestrator | Wednesday 28 January 2026 00:46:51 +0000 (0:00:00.140) 0:01:14.354 *****
2026-01-28 00:46:52.467820 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'vg_name': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:52.467834 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'vg_name': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:52.467858 | orchestrator |
2026-01-28 00:46:52.467869 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-01-28 00:46:52.467880 | orchestrator | Wednesday 28 January 2026 00:46:51 +0000 (0:00:00.162) 0:01:14.516 *****
2026-01-28 00:46:52.467932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:52.467944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:52.467955 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:52.467966 | orchestrator |
2026-01-28 00:46:52.467977 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-01-28 00:46:52.467988 | orchestrator | Wednesday 28 January 2026 00:46:52 +0000 (0:00:00.179) 0:01:14.695 *****
2026-01-28 00:46:52.467999 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:52.468016 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:52.468035 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:52.468052 | orchestrator |
2026-01-28 00:46:52.468070 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-01-28 00:46:52.468089 | orchestrator | Wednesday 28 January 2026 00:46:52 +0000 (0:00:00.155) 0:01:14.850 *****
2026-01-28 00:46:52.468108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:46:52.468149 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:46:52.468162 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:46:52.468172 | orchestrator |
2026-01-28 00:46:52.468183 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-01-28 00:46:52.468194 | orchestrator | Wednesday 28 January 2026 00:46:52 +0000 (0:00:00.156) 0:01:15.007 *****
2026-01-28 00:46:52.468205 |
orchestrator | ok: [testbed-node-5] => { 2026-01-28 00:46:52.468216 | orchestrator |  "lvm_report": { 2026-01-28 00:46:52.468227 | orchestrator |  "lv": [ 2026-01-28 00:46:52.468238 | orchestrator |  { 2026-01-28 00:46:52.468250 | orchestrator |  "lv_name": "osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e", 2026-01-28 00:46:52.468268 | orchestrator |  "vg_name": "ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e" 2026-01-28 00:46:52.468279 | orchestrator |  }, 2026-01-28 00:46:52.468290 | orchestrator |  { 2026-01-28 00:46:52.468301 | orchestrator |  "lv_name": "osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6", 2026-01-28 00:46:52.468312 | orchestrator |  "vg_name": "ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6" 2026-01-28 00:46:52.468323 | orchestrator |  } 2026-01-28 00:46:52.468333 | orchestrator |  ], 2026-01-28 00:46:52.468344 | orchestrator |  "pv": [ 2026-01-28 00:46:52.468355 | orchestrator |  { 2026-01-28 00:46:52.468365 | orchestrator |  "pv_name": "/dev/sdb", 2026-01-28 00:46:52.468376 | orchestrator |  "vg_name": "ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e" 2026-01-28 00:46:52.468390 | orchestrator |  }, 2026-01-28 00:46:52.468409 | orchestrator |  { 2026-01-28 00:46:52.468426 | orchestrator |  "pv_name": "/dev/sdc", 2026-01-28 00:46:52.468445 | orchestrator |  "vg_name": "ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6" 2026-01-28 00:46:52.468464 | orchestrator |  } 2026-01-28 00:46:52.468483 | orchestrator |  ] 2026-01-28 00:46:52.468502 | orchestrator |  } 2026-01-28 00:46:52.468514 | orchestrator | } 2026-01-28 00:46:52.468534 | orchestrator | 2026-01-28 00:46:52.468545 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:46:52.468555 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-28 00:46:52.468566 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-28 00:46:52.468577 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-01-28 00:46:52.468588 | orchestrator | 2026-01-28 00:46:52.468599 | orchestrator | 2026-01-28 00:46:52.468609 | orchestrator | 2026-01-28 00:46:52.468620 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:46:52.468630 | orchestrator | Wednesday 28 January 2026 00:46:52 +0000 (0:00:00.123) 0:01:15.131 ***** 2026-01-28 00:46:52.468641 | orchestrator | =============================================================================== 2026-01-28 00:46:52.468652 | orchestrator | Create block VGs -------------------------------------------------------- 5.77s 2026-01-28 00:46:52.468663 | orchestrator | Create block LVs -------------------------------------------------------- 4.25s 2026-01-28 00:46:52.468673 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.78s 2026-01-28 00:46:52.468684 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s 2026-01-28 00:46:52.468695 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.64s 2026-01-28 00:46:52.468706 | orchestrator | Add known partitions to the list of available block devices ------------- 1.63s 2026-01-28 00:46:52.468716 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2026-01-28 00:46:52.468727 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.57s 2026-01-28 00:46:52.468747 | orchestrator | Add known links to the list of available block devices ------------------ 1.52s 2026-01-28 00:46:52.727994 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s 2026-01-28 00:46:52.728078 | orchestrator | Add known links to the list of available block devices ------------------ 1.04s 2026-01-28 00:46:52.728088 | 
orchestrator | Print LVM report data --------------------------------------------------- 0.87s 2026-01-28 00:46:52.728095 | orchestrator | Get initial list of available block devices ----------------------------- 0.81s 2026-01-28 00:46:52.728102 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s 2026-01-28 00:46:52.728109 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.79s 2026-01-28 00:46:52.728116 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.79s 2026-01-28 00:46:52.728122 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2026-01-28 00:46:52.728163 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2026-01-28 00:46:52.728174 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.78s 2026-01-28 00:46:52.728185 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2026-01-28 00:47:04.856878 | orchestrator | 2026-01-28 00:47:04 | INFO  | Task a6ea6554-37d0-49a0-bb50-ee98818f5f74 (facts) was prepared for execution. 2026-01-28 00:47:04.856970 | orchestrator | 2026-01-28 00:47:04 | INFO  | It takes a moment until task a6ea6554-37d0-49a0-bb50-ee98818f5f74 (facts) has been started and output is visible here. 
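The play above gathers LV and PV data as JSON, combines it into the `lvm_report` structure printed earlier, and derives VG/LV names from it. A minimal Python sketch of that combine step follows; the JSON shape mirrors what `lvs --reportformat json` / `pvs --reportformat json` emit, but the exact flags and columns the testbed role requests are an assumption, and the identifiers are shortened stand-ins for the UUIDs in the log.

```python
import json

# Hypothetical command output in `--reportformat json` shape (shortened IDs).
lvs_output = json.dumps({"report": [{"lv": [
    {"lv_name": "osd-block-60e20e1d", "vg_name": "ceph-60e20e1d"},
    {"lv_name": "osd-block-6a7f1cd8", "vg_name": "ceph-6a7f1cd8"},
]}]})
pvs_output = json.dumps({"report": [{"pv": [
    {"pv_name": "/dev/sdb", "vg_name": "ceph-60e20e1d"},
    {"pv_name": "/dev/sdc", "vg_name": "ceph-6a7f1cd8"},
]}]})

# "Combine JSON from _lvs_cmd_output/_pvs_cmd_output": merge both reports
# into one dict, matching the lvm_report printed by the play.
lvm_report = {
    "lv": json.loads(lvs_output)["report"][0]["lv"],
    "pv": json.loads(pvs_output)["report"][0]["pv"],
}

# "Create list of VG/LV names": build vg/lv identifiers from the LV entries.
vg_lv_names = [f"{e['vg_name']}/{e['lv_name']}" for e in lvm_report["lv"]]
```

The later "Fail if ... LV defined in lvm_volumes is missing" tasks can then check each configured volume against such a name list.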
2026-01-28 00:47:17.076820 | orchestrator | 2026-01-28 00:47:17.076964 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-01-28 00:47:17.076992 | orchestrator | 2026-01-28 00:47:17.077011 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-01-28 00:47:17.077030 | orchestrator | Wednesday 28 January 2026 00:47:09 +0000 (0:00:00.236) 0:00:00.236 ***** 2026-01-28 00:47:17.077087 | orchestrator | ok: [testbed-manager] 2026-01-28 00:47:17.077109 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:47:17.077162 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:47:17.077182 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:47:17.077202 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:47:17.077222 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:47:17.077241 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:47:17.077261 | orchestrator | 2026-01-28 00:47:17.077281 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-01-28 00:47:17.077321 | orchestrator | Wednesday 28 January 2026 00:47:10 +0000 (0:00:01.017) 0:00:01.254 ***** 2026-01-28 00:47:17.077344 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:47:17.077390 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:47:17.077429 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:47:17.077452 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:47:17.077473 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:47:17.077495 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:47:17.077519 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:47:17.077540 | orchestrator | 2026-01-28 00:47:17.077563 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-01-28 00:47:17.077584 | orchestrator | 2026-01-28 00:47:17.077605 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-01-28 00:47:17.077625 | orchestrator | Wednesday 28 January 2026 00:47:11 +0000 (0:00:01.096) 0:00:02.350 ***** 2026-01-28 00:47:17.077644 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:47:17.077663 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:47:17.077681 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:47:17.077699 | orchestrator | ok: [testbed-manager] 2026-01-28 00:47:17.077717 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:47:17.077735 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:47:17.077752 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:47:17.077771 | orchestrator | 2026-01-28 00:47:17.077789 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-01-28 00:47:17.077806 | orchestrator | 2026-01-28 00:47:17.077824 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-01-28 00:47:17.077842 | orchestrator | Wednesday 28 January 2026 00:47:15 +0000 (0:00:04.805) 0:00:07.156 ***** 2026-01-28 00:47:17.077862 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:47:17.077882 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:47:17.077901 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:47:17.077919 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:47:17.077936 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:47:17.077953 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:47:17.077970 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:47:17.077988 | orchestrator | 2026-01-28 00:47:17.078006 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:47:17.078172 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:47:17.078191 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-01-28 00:47:17.078202 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:47:17.078214 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:47:17.078225 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:47:17.078236 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:47:17.078247 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:47:17.078276 | orchestrator | 2026-01-28 00:47:17.078287 | orchestrator | 2026-01-28 00:47:17.078297 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:47:17.078308 | orchestrator | Wednesday 28 January 2026 00:47:16 +0000 (0:00:00.520) 0:00:07.676 ***** 2026-01-28 00:47:17.078319 | orchestrator | =============================================================================== 2026-01-28 00:47:17.078330 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.81s 2026-01-28 00:47:17.078341 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2026-01-28 00:47:17.078351 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s 2026-01-28 00:47:17.078362 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-01-28 00:47:29.782525 | orchestrator | 2026-01-28 00:47:29 | INFO  | Task a7ab152c-e4cf-45ad-ab69-558660d5c366 (frr) was prepared for execution. 2026-01-28 00:47:29.782638 | orchestrator | 2026-01-28 00:47:29 | INFO  | It takes a moment until task a7ab152c-e4cf-45ad-ab69-558660d5c366 (frr) has been started and output is visible here. 
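The facts play above creates a custom facts directory and (conditionally) copies fact files into it before gathering. A sketch of the mechanism those tasks rely on: files in Ansible's `facts.d` directory surface as `ansible_local` data when facts are gathered. The directory path, filename, and content here are hypothetical, not taken from the role.

```python
import json
import pathlib
import tempfile

# Stand-in for the custom facts directory the role ensures exists
# (on real hosts typically /etc/ansible/facts.d).
facts_d = pathlib.Path(tempfile.mkdtemp()) / "facts.d"
facts_d.mkdir()

# A copied fact file: static *.fact files contain JSON (or INI) data.
(facts_d / "testbed.fact").write_text(json.dumps({"role": "ceph-osd"}))

# Ansible's setup module reads *.fact files and exposes each one under
# ansible_local.<basename>; reading them back simulates that lookup.
ansible_local = {
    p.stem: json.loads(p.read_text()) for p in facts_d.glob("*.fact")
}
```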
2026-01-28 00:47:58.302246 | orchestrator | 2026-01-28 00:47:58.302460 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-01-28 00:47:58.302482 | orchestrator | 2026-01-28 00:47:58.302494 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-01-28 00:47:58.302505 | orchestrator | Wednesday 28 January 2026 00:47:34 +0000 (0:00:00.239) 0:00:00.239 ***** 2026-01-28 00:47:58.302517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-01-28 00:47:58.302528 | orchestrator | 2026-01-28 00:47:58.302539 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-01-28 00:47:58.302550 | orchestrator | Wednesday 28 January 2026 00:47:34 +0000 (0:00:00.228) 0:00:00.468 ***** 2026-01-28 00:47:58.302561 | orchestrator | changed: [testbed-manager] 2026-01-28 00:47:58.302572 | orchestrator | 2026-01-28 00:47:58.302583 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-01-28 00:47:58.302594 | orchestrator | Wednesday 28 January 2026 00:47:35 +0000 (0:00:01.237) 0:00:01.706 ***** 2026-01-28 00:47:58.302619 | orchestrator | changed: [testbed-manager] 2026-01-28 00:47:58.302630 | orchestrator | 2026-01-28 00:47:58.302641 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-01-28 00:47:58.302652 | orchestrator | Wednesday 28 January 2026 00:47:47 +0000 (0:00:11.411) 0:00:13.117 ***** 2026-01-28 00:47:58.302662 | orchestrator | ok: [testbed-manager] 2026-01-28 00:47:58.302673 | orchestrator | 2026-01-28 00:47:58.302684 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-01-28 00:47:58.302695 | orchestrator | Wednesday 28 January 2026 00:47:48 +0000 (0:00:01.127) 0:00:14.245 ***** 2026-01-28 
00:47:58.302706 | orchestrator | changed: [testbed-manager] 2026-01-28 00:47:58.302716 | orchestrator | 2026-01-28 00:47:58.302727 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-01-28 00:47:58.302738 | orchestrator | Wednesday 28 January 2026 00:47:49 +0000 (0:00:01.104) 0:00:15.350 ***** 2026-01-28 00:47:58.302748 | orchestrator | ok: [testbed-manager] 2026-01-28 00:47:58.302759 | orchestrator | 2026-01-28 00:47:58.302772 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-01-28 00:47:58.302785 | orchestrator | Wednesday 28 January 2026 00:47:50 +0000 (0:00:01.263) 0:00:16.613 ***** 2026-01-28 00:47:58.302798 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:47:58.302811 | orchestrator | 2026-01-28 00:47:58.302824 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-01-28 00:47:58.302836 | orchestrator | Wednesday 28 January 2026 00:47:50 +0000 (0:00:00.170) 0:00:16.784 ***** 2026-01-28 00:47:58.302849 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:47:58.302882 | orchestrator | 2026-01-28 00:47:58.302895 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-01-28 00:47:58.302909 | orchestrator | Wednesday 28 January 2026 00:47:50 +0000 (0:00:00.155) 0:00:16.940 ***** 2026-01-28 00:47:58.302921 | orchestrator | changed: [testbed-manager] 2026-01-28 00:47:58.302933 | orchestrator | 2026-01-28 00:47:58.302945 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-01-28 00:47:58.302958 | orchestrator | Wednesday 28 January 2026 00:47:52 +0000 (0:00:01.126) 0:00:18.067 ***** 2026-01-28 00:47:58.302970 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-01-28 00:47:58.302983 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-01-28 00:47:58.302999 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-01-28 00:47:58.303086 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-01-28 00:47:58.303109 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-01-28 00:47:58.303150 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-01-28 00:47:58.303162 | orchestrator | 2026-01-28 00:47:58.303173 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-01-28 00:47:58.303184 | orchestrator | Wednesday 28 January 2026 00:47:54 +0000 (0:00:02.469) 0:00:20.537 ***** 2026-01-28 00:47:58.303206 | orchestrator | ok: [testbed-manager] 2026-01-28 00:47:58.303217 | orchestrator | 2026-01-28 00:47:58.303228 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-01-28 00:47:58.303239 | orchestrator | Wednesday 28 January 2026 00:47:56 +0000 (0:00:01.788) 0:00:22.325 ***** 2026-01-28 00:47:58.303250 | orchestrator | changed: [testbed-manager] 2026-01-28 00:47:58.303260 | orchestrator | 2026-01-28 00:47:58.303271 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:47:58.303283 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:47:58.303326 | orchestrator | 2026-01-28 00:47:58.303444 | orchestrator | 2026-01-28 00:47:58.303455 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:47:58.303466 | orchestrator | Wednesday 28 January 2026 00:47:57 +0000 (0:00:01.509) 0:00:23.835 ***** 2026-01-28 00:47:58.303477 | 
orchestrator | =============================================================================== 2026-01-28 00:47:58.303487 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.41s 2026-01-28 00:47:58.303498 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.47s 2026-01-28 00:47:58.303509 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.79s 2026-01-28 00:47:58.303519 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.51s 2026-01-28 00:47:58.303530 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.26s 2026-01-28 00:47:58.303559 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.24s 2026-01-28 00:47:58.303571 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.13s 2026-01-28 00:47:58.303582 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.13s 2026-01-28 00:47:58.303593 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.10s 2026-01-28 00:47:58.303603 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.23s 2026-01-28 00:47:58.303614 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.17s 2026-01-28 00:47:58.303625 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-01-28 00:47:58.702081 | orchestrator | 2026-01-28 00:47:58.704426 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Jan 28 00:47:58 UTC 2026 2026-01-28 00:47:58.704463 | orchestrator | 2026-01-28 00:48:00.751152 | orchestrator | 2026-01-28 00:48:00 | INFO  | Collection nutshell is prepared for execution 2026-01-28 00:48:00.751225 | orchestrator | 2026-01-28 00:48:00 | INFO  | A [0] - 
dotfiles 2026-01-28 00:48:10.783997 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [0] - homer 2026-01-28 00:48:10.784105 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [0] - netdata 2026-01-28 00:48:10.784121 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [0] - openstackclient 2026-01-28 00:48:10.784181 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [0] - phpmyadmin 2026-01-28 00:48:10.784193 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [0] - common 2026-01-28 00:48:10.789287 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [1] -- loadbalancer 2026-01-28 00:48:10.789357 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [2] --- opensearch 2026-01-28 00:48:10.789752 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [2] --- mariadb-ng 2026-01-28 00:48:10.790160 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [3] ---- horizon 2026-01-28 00:48:10.790408 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [3] ---- keystone 2026-01-28 00:48:10.790867 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [4] ----- neutron 2026-01-28 00:48:10.791260 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [5] ------ wait-for-nova 2026-01-28 00:48:10.791629 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [6] ------- octavia 2026-01-28 00:48:10.793515 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [4] ----- barbican 2026-01-28 00:48:10.793803 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [4] ----- designate 2026-01-28 00:48:10.794485 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [4] ----- ironic 2026-01-28 00:48:10.795952 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [4] ----- placement 2026-01-28 00:48:10.795981 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [4] ----- magnum 2026-01-28 00:48:10.795993 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [1] -- openvswitch 2026-01-28 00:48:10.796004 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [2] --- ovn 2026-01-28 00:48:10.796015 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [1] -- memcached 2026-01-28 
00:48:10.796289 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [1] -- redis 2026-01-28 00:48:10.796497 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [1] -- rabbitmq-ng 2026-01-28 00:48:10.796627 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [0] - kubernetes 2026-01-28 00:48:10.799529 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [1] -- kubeconfig 2026-01-28 00:48:10.799570 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [1] -- copy-kubeconfig 2026-01-28 00:48:10.799904 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [0] - ceph 2026-01-28 00:48:10.802211 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [1] -- ceph-pools 2026-01-28 00:48:10.802254 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [2] --- copy-ceph-keys 2026-01-28 00:48:10.802474 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [3] ---- cephclient 2026-01-28 00:48:10.802498 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-01-28 00:48:10.802738 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [4] ----- wait-for-keystone 2026-01-28 00:48:10.802843 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [5] ------ kolla-ceph-rgw 2026-01-28 00:48:10.803192 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [5] ------ glance 2026-01-28 00:48:10.803215 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [5] ------ cinder 2026-01-28 00:48:10.803258 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [5] ------ nova 2026-01-28 00:48:10.803583 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [4] ----- prometheus 2026-01-28 00:48:10.803607 | orchestrator | 2026-01-28 00:48:10 | INFO  | A [5] ------ grafana 2026-01-28 00:48:11.067081 | orchestrator | 2026-01-28 00:48:11 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-01-28 00:48:11.067223 | orchestrator | 2026-01-28 00:48:11 | INFO  | Tasks are running in the background 2026-01-28 00:48:14.727714 | orchestrator | 2026-01-28 00:48:14 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-01-28 00:48:16.849160 | orchestrator | 2026-01-28 00:48:16 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:48:16.849386 | orchestrator | 2026-01-28 00:48:16 | INFO  | Task e4137670-7289-448c-9db8-e4cde658f6bb is in state STARTED 2026-01-28 00:48:16.852629 | orchestrator | 2026-01-28 00:48:16 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:48:16.852675 | orchestrator | 2026-01-28 00:48:16 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:48:16.853394 | orchestrator | 2026-01-28 00:48:16 | INFO  | Task 9162426a-1798-460c-800e-a2b0f7e2b34c is in state STARTED 2026-01-28 00:48:16.854234 | orchestrator | 2026-01-28 00:48:16 | INFO  | Task 34a01341-d2e3-4e2d-a3f3-91cbb1399c26 is in state STARTED 2026-01-28 00:48:16.854895 | orchestrator | 2026-01-28 00:48:16 | INFO  | Task 3335dd7c-aea0-463e-ba87-e7709619ebed is in state STARTED 2026-01-28 00:48:16.854930 | orchestrator | 2026-01-28 00:48:16 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:48:19.890726 | orchestrator | 2026-01-28 00:48:19 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:48:19.894218 | orchestrator | 2026-01-28 00:48:19 | INFO  | Task e4137670-7289-448c-9db8-e4cde658f6bb is in state STARTED 2026-01-28 00:48:19.896741 | orchestrator | 2026-01-28 00:48:19 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:48:19.896806 | orchestrator | 2026-01-28 00:48:19 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:48:19.897708 | orchestrator | 2026-01-28 00:48:19 | INFO  | Task 9162426a-1798-460c-800e-a2b0f7e2b34c is in state STARTED 2026-01-28 00:48:19.898277 | orchestrator | 2026-01-28 00:48:19 | INFO  | Task 34a01341-d2e3-4e2d-a3f3-91cbb1399c26 is in state STARTED 2026-01-28 00:48:19.899650 | orchestrator | 2026-01-28 00:48:19 | INFO  | Task 
2026-01-28 00:48:19 – 00:48:38 | orchestrator | INFO | Tasks e6f1d272-601d-40fb-b1c7-fa3cbf957a17, e4137670-7289-448c-9db8-e4cde658f6bb, c17fea7a-1ddc-4d82-852c-8a992702ad4e, a14b0f63-5775-4240-a749-dbc23b4cd98d, 9162426a-1798-460c-800e-a2b0f7e2b34c, 34a01341-d2e3-4e2d-a3f3-91cbb1399c26 and 3335dd7c-aea0-463e-ba87-e7709619ebed polled every ~3 s; all remain in state STARTED, each cycle followed by "Wait 1 second(s) until the next check"
2026-01-28 00:48:41.658840 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-01-28 00:48:41.658870 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-01-28 00:48:41.658882 | orchestrator | Wednesday 28 January 2026 00:48:24 +0000 (0:00:00.840) 0:00:00.840 *****
2026-01-28 00:48:41 | orchestrator | changed: [testbed-manager] and [testbed-node-0] through [testbed-node-5]
2026-01-28 00:48:41.658988 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
******** 2026-01-28 00:48:41.659000 | orchestrator | Wednesday 28 January 2026 00:48:29 +0000 (0:00:04.934) 0:00:05.775 *****
2026-01-28 00:48:41 | orchestrator | ok: [testbed-manager] and [testbed-node-0] through [testbed-node-5] => (item=.tmux.conf)
2026-01-28 00:48:41.659160 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-01-28 00:48:41.659173 | orchestrator | Wednesday 28 January 2026 00:48:31 +0000 (0:00:02.305) 0:00:08.080 *****
2026-01-28 00:48:41 | orchestrator | ok on all seven hosts => (item=.tmux.conf); on each host the preceding check `ls -F ~/.tmux.conf` returned rc=2 ("ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", failed_when_result: False), so there was no existing file to remove
2026-01-28 00:48:41.659424 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
**** 2026-01-28 00:48:41.659437 | orchestrator | Wednesday 28 January 2026 00:48:33 +0000 (0:00:02.009) 0:00:10.090 *****
2026-01-28 00:48:41 | orchestrator | ok: [testbed-manager] and [testbed-node-0] through [testbed-node-5] => (item=.tmux.conf)
2026-01-28 00:48:41.659565 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-01-28 00:48:41.659578 | orchestrator | Wednesday 28 January 2026 00:48:36 +0000 (0:00:02.879) 0:00:12.969 *****
2026-01-28 00:48:41 | orchestrator | changed: [testbed-manager] and [testbed-node-0] through [testbed-node-5] => (item=.tmux.conf)
2026-01-28 00:48:41.659678 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:48:41 | orchestrator | testbed-manager and testbed-node-0 through testbed-node-5 : ok=5  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0 (identical on all seven hosts)
2026-01-28 00:48:41.659805 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:48:41.659816 | orchestrator | Wednesday 28 January 2026 00:48:38 +0000 (0:00:02.091) 0:00:15.061 *****
2026-01-28 00:48:41.659827 | orchestrator | ===============================================================================
2026-01-28 00:48:41 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.93s
2026-01-28 00:48:41 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.88s
2026-01-28 00:48:41 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.31s
2026-01-28 00:48:41 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.09s
2026-01-28 00:48:41 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.
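The role's behavior visible in the tasks above — check whether the target is already a link, remove a plain file that a link would replace, ensure the parent folder exists, then create the symlink — can be sketched as a minimal, idempotent Python equivalent. The function name and paths here are illustrative, not taken from the role itself:

```python
import os

def link_dotfile(repo_dir: str, home: str, name: str) -> str:
    """Idempotently link <home>/<name> to <repo_dir>/<name>, mirroring the role's steps."""
    src = os.path.join(repo_dir, name)
    dst = os.path.join(home, name)
    # "Ensure all configured dotfiles are links": nothing to do if already correct.
    if os.path.islink(dst) and os.readlink(dst) == src:
        return "ok"
    # "Remove existing dotfiles file if a replacement is being linked."
    if os.path.lexists(dst):
        os.remove(dst)
    # "Ensure parent folders of link dotfiles exist."
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    # "Link dotfiles into home folder."
    os.symlink(src, dst)
    return "changed"
```

Run twice against the same target, the first call reports "changed" and the second "ok" — the same changed/ok split the play recap shows per host.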
--- 2.01s
2026-01-28 00:48:41 | orchestrator | INFO | Task 34a01341-d2e3-4e2d-a3f3-91cbb1399c26 is in state SUCCESS; new task e4015124-8db1-40ae-9c1a-a4b3e4c057c8 now polled alongside the others
2026-01-28 00:48:41 – 00:49:03 | orchestrator | INFO | Tasks e6f1d272-601d-40fb-b1c7-fa3cbf957a17, e4137670-7289-448c-9db8-e4cde658f6bb, e4015124-8db1-40ae-9c1a-a4b3e4c057c8, c17fea7a-1ddc-4d82-852c-8a992702ad4e, a14b0f63-5775-4240-a749-dbc23b4cd98d, 9162426a-1798-460c-800e-a2b0f7e2b34c and 3335dd7c-aea0-463e-ba87-e7709619ebed polled every ~3 s; all remain in state STARTED, each cycle followed by "Wait 1 second(s) until the next check"
2026-01-28 00:49:06 | orchestrator | INFO | Task 3335dd7c-aea0-463e-ba87-e7709619ebed is in state SUCCESS
2026-01-28 00:49:09 – 00:49:15 | orchestrator | INFO | The remaining six tasks still in state STARTED on each check
2026-01-28 00:49:18 | orchestrator | INFO | Task 9162426a-1798-460c-800e-a2b0f7e2b34c is in state SUCCESS
2026-01-28 00:49:21 – 00:49:49 | orchestrator | INFO | Tasks e6f1d272-601d-40fb-b1c7-fa3cbf957a17, e4137670-7289-448c-9db8-e4cde658f6bb, e4015124-8db1-40ae-9c1a-a4b3e4c057c8, c17fea7a-1ddc-4d82-852c-8a992702ad4e and a14b0f63-5775-4240-a749-dbc23b4cd98d polled every ~3 s; all remain in state STARTED, each cycle followed by "Wait 1 second(s) until the next check"
2026-01-28 00:49:52.438370 | orchestrator | 2026-01-28 00:49:52 | INFO | Task
c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:49:52.446598 | orchestrator | 2026-01-28 00:49:52 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:49:52.449290 | orchestrator | 2026-01-28 00:49:52 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:49:55.540247 | orchestrator | 2026-01-28 00:49:55 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:49:55.542909 | orchestrator | 2026-01-28 00:49:55.542976 | orchestrator | 2026-01-28 00:49:55.542991 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-01-28 00:49:55.543003 | orchestrator | 2026-01-28 00:49:55.543014 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-01-28 00:49:55.543027 | orchestrator | Wednesday 28 January 2026 00:48:26 +0000 (0:00:01.100) 0:00:01.100 ***** 2026-01-28 00:49:55.543038 | orchestrator | ok: [testbed-manager] => { 2026-01-28 00:49:55.543051 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-01-28 00:49:55.543064 | orchestrator | }
2026-01-28 00:49:55.543075 | orchestrator |
2026-01-28 00:49:55.543086 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-01-28 00:49:55.543097 | orchestrator | Wednesday 28 January 2026 00:48:26 +0000 (0:00:00.233) 0:00:01.333 *****
2026-01-28 00:49:55.543144 | orchestrator | ok: [testbed-manager]
2026-01-28 00:49:55.543166 | orchestrator |
2026-01-28 00:49:55.543186 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-01-28 00:49:55.543205 | orchestrator | Wednesday 28 January 2026 00:48:27 +0000 (0:00:01.416) 0:00:02.750 *****
2026-01-28 00:49:55.543223 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-01-28 00:49:55.543234 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-01-28 00:49:55.543245 | orchestrator |
2026-01-28 00:49:55.543256 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-01-28 00:49:55.543267 | orchestrator | Wednesday 28 January 2026 00:48:29 +0000 (0:00:01.800) 0:00:04.551 *****
2026-01-28 00:49:55.543278 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.543288 | orchestrator |
2026-01-28 00:49:55.543299 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-01-28 00:49:55.543310 | orchestrator | Wednesday 28 January 2026 00:48:33 +0000 (0:00:04.318) 0:00:08.869 *****
2026-01-28 00:49:55.543321 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.543332 | orchestrator |
2026-01-28 00:49:55.543342 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-01-28 00:49:55.543353 | orchestrator | Wednesday 28 January 2026 00:48:35 +0000 (0:00:01.854) 0:00:10.724 *****
2026-01-28 00:49:55.543365 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-01-28 00:49:55.543376 | orchestrator | ok: [testbed-manager]
2026-01-28 00:49:55.543386 | orchestrator |
2026-01-28 00:49:55.543397 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-01-28 00:49:55.543431 | orchestrator | Wednesday 28 January 2026 00:49:03 +0000 (0:00:27.859) 0:00:38.583 *****
2026-01-28 00:49:55.543442 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.543452 | orchestrator |
2026-01-28 00:49:55.543465 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:49:55.543483 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:49:55.543503 | orchestrator |
2026-01-28 00:49:55.543521 | orchestrator |
2026-01-28 00:49:55.543540 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:49:55.543557 | orchestrator | Wednesday 28 January 2026 00:49:05 +0000 (0:00:02.168) 0:00:40.752 *****
2026-01-28 00:49:55.543576 | orchestrator | ===============================================================================
2026-01-28 00:49:55.543593 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.86s
2026-01-28 00:49:55.543612 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.32s
2026-01-28 00:49:55.543628 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.17s
2026-01-28 00:49:55.543647 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.85s
2026-01-28 00:49:55.543665 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.80s
2026-01-28 00:49:55.543684 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.42s
2026-01-28 00:49:55.543702 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.23s
2026-01-28 00:49:55.543720 | orchestrator |
2026-01-28 00:49:55.543738 | orchestrator |
2026-01-28 00:49:55.543756 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-01-28 00:49:55.543775 | orchestrator |
2026-01-28 00:49:55.543795 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-01-28 00:49:55.543815 | orchestrator | Wednesday 28 January 2026 00:48:26 +0000 (0:00:00.928) 0:00:00.928 *****
2026-01-28 00:49:55.543834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-01-28 00:49:55.543847 | orchestrator |
2026-01-28 00:49:55.543865 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-01-28 00:49:55.543876 | orchestrator | Wednesday 28 January 2026 00:48:27 +0000 (0:00:00.299) 0:00:01.228 *****
2026-01-28 00:49:55.543887 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-01-28 00:49:55.543898 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-01-28 00:49:55.543908 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-01-28 00:49:55.543919 | orchestrator |
2026-01-28 00:49:55.543930 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-01-28 00:49:55.543940 | orchestrator | Wednesday 28 January 2026 00:48:29 +0000 (0:00:02.563) 0:00:03.791 *****
2026-01-28 00:49:55.543951 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.543961 | orchestrator |
2026-01-28 00:49:55.543973 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-01-28 00:49:55.543984 | orchestrator | Wednesday 28 January 2026 00:48:33 +0000 (0:00:03.954) 0:00:07.746 *****
2026-01-28 00:49:55.544012 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-01-28 00:49:55.544023 | orchestrator | ok: [testbed-manager]
2026-01-28 00:49:55.544034 | orchestrator |
2026-01-28 00:49:55.544045 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-01-28 00:49:55.544055 | orchestrator | Wednesday 28 January 2026 00:49:09 +0000 (0:00:36.232) 0:00:43.978 *****
2026-01-28 00:49:55.544066 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.544076 | orchestrator |
2026-01-28 00:49:55.544087 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-01-28 00:49:55.544180 | orchestrator | Wednesday 28 January 2026 00:49:10 +0000 (0:00:01.089) 0:00:45.068 *****
2026-01-28 00:49:55.544194 | orchestrator | ok: [testbed-manager]
2026-01-28 00:49:55.544205 | orchestrator |
2026-01-28 00:49:55.544216 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-01-28 00:49:55.544227 | orchestrator | Wednesday 28 January 2026 00:49:12 +0000 (0:00:01.013) 0:00:46.081 *****
2026-01-28 00:49:55.544237 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.544248 | orchestrator |
2026-01-28 00:49:55.544258 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-01-28 00:49:55.544269 | orchestrator | Wednesday 28 January 2026 00:49:14 +0000 (0:00:02.020) 0:00:48.102 *****
2026-01-28 00:49:55.544279 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.544290 | orchestrator |
2026-01-28 00:49:55.544300 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-01-28 00:49:55.544311 | orchestrator | Wednesday 28 January 2026 00:49:15 +0000 (0:00:01.016) 0:00:49.118 *****
2026-01-28 00:49:55.544322 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.544332 | orchestrator |
2026-01-28 00:49:55.544343 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-01-28 00:49:55.544353 | orchestrator | Wednesday 28 January 2026 00:49:15 +0000 (0:00:00.601) 0:00:49.719 *****
2026-01-28 00:49:55.544363 | orchestrator | ok: [testbed-manager]
2026-01-28 00:49:55.544374 | orchestrator |
2026-01-28 00:49:55.544384 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:49:55.544395 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:49:55.544406 | orchestrator |
2026-01-28 00:49:55.544417 | orchestrator |
2026-01-28 00:49:55.544427 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:49:55.544438 | orchestrator | Wednesday 28 January 2026 00:49:16 +0000 (0:00:00.474) 0:00:50.194 *****
2026-01-28 00:49:55.544449 | orchestrator | ===============================================================================
2026-01-28 00:49:55.544459 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.23s
2026-01-28 00:49:55.544470 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.95s
2026-01-28 00:49:55.544480 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.56s
2026-01-28 00:49:55.544491 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.02s
2026-01-28 00:49:55.544501 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.09s
2026-01-28 00:49:55.544512 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.02s
2026-01-28 00:49:55.544522 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.01s
2026-01-28 00:49:55.544533 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.60s
2026-01-28 00:49:55.544543 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.47s
2026-01-28 00:49:55.544554 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.30s
2026-01-28 00:49:55.544564 | orchestrator |
2026-01-28 00:49:55.544575 | orchestrator |
2026-01-28 00:49:55.544586 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 00:49:55.544596 | orchestrator |
2026-01-28 00:49:55.544607 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 00:49:55.544617 | orchestrator | Wednesday 28 January 2026 00:48:23 +0000 (0:00:00.870) 0:00:00.870 *****
2026-01-28 00:49:55.544628 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-01-28 00:49:55.544638 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-01-28 00:49:55.544649 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-01-28 00:49:55.544659 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-01-28 00:49:55.544669 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-01-28 00:49:55.544686 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-01-28 00:49:55.544697 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-01-28 00:49:55.544707 | orchestrator |
2026-01-28 00:49:55.544718 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-01-28 00:49:55.544729 | orchestrator |
2026-01-28 00:49:55.544739 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-01-28 00:49:55.544750 | orchestrator | Wednesday 28 January 2026 00:48:27 +0000 (0:00:03.868) 0:00:04.739 *****
2026-01-28 00:49:55.544775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:49:55.544789 | orchestrator |
2026-01-28 00:49:55.544800 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-01-28 00:49:55.544811 | orchestrator | Wednesday 28 January 2026 00:48:28 +0000 (0:00:01.018) 0:00:05.758 *****
2026-01-28 00:49:55.544821 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:49:55.544832 | orchestrator | ok: [testbed-manager]
2026-01-28 00:49:55.544843 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:49:55.544853 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:49:55.544864 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:49:55.544881 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:49:55.544892 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:49:55.544902 | orchestrator |
2026-01-28 00:49:55.544913 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-01-28 00:49:55.544924 | orchestrator | Wednesday 28 January 2026 00:48:30 +0000 (0:00:01.437) 0:00:07.196 *****
2026-01-28 00:49:55.544935 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:49:55.544945 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:49:55.544955 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:49:55.544971 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:49:55.544989 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:49:55.545007 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:49:55.545025 | orchestrator | ok: [testbed-manager]
2026-01-28 00:49:55.545042 | orchestrator |
2026-01-28 00:49:55.545060 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-01-28 00:49:55.545078 | orchestrator | Wednesday 28 January 2026 00:48:33 +0000 (0:00:03.073) 0:00:10.270 *****
2026-01-28 00:49:55.545096 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:49:55.545178 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.545199 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:49:55.545216 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:49:55.545233 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:49:55.545253 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:49:55.545271 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:49:55.545289 | orchestrator |
2026-01-28 00:49:55.545307 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-01-28 00:49:55.545326 | orchestrator | Wednesday 28 January 2026 00:48:36 +0000 (0:00:02.917) 0:00:13.187 *****
2026-01-28 00:49:55.545345 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:49:55.545362 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:49:55.545381 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:49:55.545400 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:49:55.545418 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:49:55.545432 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:49:55.545443 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.545453 | orchestrator |
2026-01-28 00:49:55.545464 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-01-28 00:49:55.545475 | orchestrator | Wednesday 28 January 2026 00:48:51 +0000 (0:00:14.828) 0:00:28.016 *****
2026-01-28 00:49:55.545485 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:49:55.545496 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:49:55.545517 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:49:55.545527 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:49:55.545538 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:49:55.545583 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:49:55.545595 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.545606 | orchestrator |
2026-01-28 00:49:55.545624 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-01-28 00:49:55.545641 | orchestrator | Wednesday 28 January 2026 00:49:31 +0000 (0:00:40.834) 0:01:08.851 *****
2026-01-28 00:49:55.545660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:49:55.545680 | orchestrator |
2026-01-28 00:49:55.545697 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-01-28 00:49:55.545714 | orchestrator | Wednesday 28 January 2026 00:49:33 +0000 (0:00:01.899) 0:01:10.750 *****
2026-01-28 00:49:55.545732 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-01-28 00:49:55.545750 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-01-28 00:49:55.545768 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-01-28 00:49:55.545785 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-01-28 00:49:55.545803 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-01-28 00:49:55.545820 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-01-28 00:49:55.545837 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-01-28 00:49:55.545855 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-01-28 00:49:55.545872 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-01-28 00:49:55.545890 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-01-28 00:49:55.545906 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-01-28 00:49:55.545923 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-01-28 00:49:55.545940 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-01-28 00:49:55.545958 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-01-28 00:49:55.545976 | orchestrator |
2026-01-28 00:49:55.545993 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-01-28 00:49:55.546012 | orchestrator | Wednesday 28 January 2026 00:49:39 +0000 (0:00:05.255) 0:01:16.006 *****
2026-01-28 00:49:55.546132 | orchestrator | ok: [testbed-manager]
2026-01-28 00:49:55.546162 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:49:55.546181 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:49:55.546199 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:49:55.546217 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:49:55.546234 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:49:55.546253 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:49:55.546273 | orchestrator |
2026-01-28 00:49:55.546291 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-01-28 00:49:55.546310 | orchestrator | Wednesday 28 January 2026 00:49:40 +0000 (0:00:01.051) 0:01:17.057 *****
2026-01-28 00:49:55.546328 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:49:55.546348 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.546367 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:49:55.546387 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:49:55.546405 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:49:55.546423 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:49:55.546440 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:49:55.546458 | orchestrator |
2026-01-28 00:49:55.546475 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-01-28 00:49:55.546512 | orchestrator | Wednesday 28 January 2026 00:49:41 +0000 (0:00:01.378) 0:01:18.436 *****
2026-01-28 00:49:55.546532 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:49:55.546550 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:49:55.546584 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:49:55.546601 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:49:55.546617 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:49:55.546634 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:49:55.546650 | orchestrator | ok: [testbed-manager]
2026-01-28 00:49:55.546667 | orchestrator |
2026-01-28 00:49:55.546684 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-01-28 00:49:55.546701 | orchestrator | Wednesday 28 January 2026 00:49:43 +0000 (0:00:01.545) 0:01:19.981 *****
2026-01-28 00:49:55.546717 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:49:55.546733 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:49:55.546750 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:49:55.546767 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:49:55.546783 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:49:55.546799 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:49:55.546817 | orchestrator | ok: [testbed-manager]
2026-01-28 00:49:55.546833 | orchestrator |
2026-01-28 00:49:55.546851 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-01-28 00:49:55.546868 | orchestrator | Wednesday 28 January 2026 00:49:45 +0000 (0:00:02.637) 0:01:22.619 *****
2026-01-28 00:49:55.546886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-01-28 00:49:55.546906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:49:55.546923 | orchestrator |
2026-01-28 00:49:55.546941 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-01-28 00:49:55.546958 | orchestrator | Wednesday 28 January 2026 00:49:47 +0000 (0:00:02.133) 0:01:24.753 *****
2026-01-28 00:49:55.546976 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.546994 | orchestrator |
2026-01-28 00:49:55.547014 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-01-28 00:49:55.547241 | orchestrator | Wednesday 28 January 2026 00:49:50 +0000 (0:00:02.693) 0:01:27.446 *****
2026-01-28 00:49:55.547267 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:49:55.547285 | orchestrator | changed: [testbed-manager]
2026-01-28 00:49:55.547303 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:49:55.547322 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:49:55.547341 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:49:55.547359 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:49:55.547378 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:49:55.547397 | orchestrator |
2026-01-28 00:49:55.547417 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:49:55.547437 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:49:55.547457 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:49:55.547475 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:49:55.547495 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:49:55.547513 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:49:55.547531 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:49:55.547550 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:49:55.547587 | orchestrator |
2026-01-28 00:49:55.547606 | orchestrator |
2026-01-28 00:49:55.547624 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:49:55.547643 | orchestrator | Wednesday 28 January 2026 00:49:53 +0000 (0:00:03.233) 0:01:30.679 *****
2026-01-28 00:49:55.547663 | orchestrator | ===============================================================================
2026-01-28 00:49:55.547683 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 40.83s
2026-01-28 00:49:55.547702 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.83s
2026-01-28 00:49:55.547743 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.26s
2026-01-28 00:49:55.547765 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.87s
2026-01-28 00:49:55.547786 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.23s
2026-01-28 00:49:55.547807 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.07s
2026-01-28 00:49:55.547988 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.92s
2026-01-28 00:49:55.548005 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.69s
2026-01-28 00:49:55.548018 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.64s
2026-01-28 00:49:55.548030 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.13s
2026-01-28 00:49:55.548042 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.90s
2026-01-28 00:49:55.548069 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.55s
2026-01-28 00:49:55.548082 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.44s
2026-01-28 00:49:55.548095 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.38s
2026-01-28 00:49:55.548171 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.05s
2026-01-28 00:49:55.548189 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.02s
2026-01-28 00:49:55.548203 | orchestrator | 2026-01-28 00:49:55 | INFO  | Task e4137670-7289-448c-9db8-e4cde658f6bb is in state SUCCESS
2026-01-28 00:49:55.548216 | orchestrator | 2026-01-28 00:49:55 | INFO  | Task e4015124-8db1-40ae-9c1a-a4b3e4c057c8 is in state STARTED
2026-01-28 00:49:55.548227 | orchestrator | 2026-01-28 00:49:55 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:49:55.549572 | orchestrator | 2026-01-28 00:49:55 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:49:55.549683 | orchestrator | 2026-01-28 00:49:55 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:49:58.584742 | orchestrator | 2026-01-28 00:49:58 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED
2026-01-28 00:49:58.584843 | orchestrator | 2026-01-28 00:49:58 | INFO  | Task e4015124-8db1-40ae-9c1a-a4b3e4c057c8 is in state STARTED
2026-01-28 00:49:58.587541 | orchestrator | 2026-01-28 00:49:58 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:49:58.587573 | orchestrator | 2026-01-28 00:49:58 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:49:58.587585 | orchestrator |
2026-01-28 00:49:58 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:50:01.633904 | orchestrator | 2026-01-28 00:50:01 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:50:01.635854 | orchestrator | 2026-01-28 00:50:01 | INFO  | Task e4015124-8db1-40ae-9c1a-a4b3e4c057c8 is in state STARTED 2026-01-28 00:50:01.638414 | orchestrator | 2026-01-28 00:50:01 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:50:01.639279 | orchestrator | 2026-01-28 00:50:01 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:50:01.639338 | orchestrator | 2026-01-28 00:50:01 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:50:04.691351 | orchestrator | 2026-01-28 00:50:04 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:50:04.692320 | orchestrator | 2026-01-28 00:50:04 | INFO  | Task e4015124-8db1-40ae-9c1a-a4b3e4c057c8 is in state STARTED 2026-01-28 00:50:04.695846 | orchestrator | 2026-01-28 00:50:04 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:50:04.699538 | orchestrator | 2026-01-28 00:50:04 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:50:04.699610 | orchestrator | 2026-01-28 00:50:04 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:50:07.751632 | orchestrator | 2026-01-28 00:50:07 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:50:07.751746 | orchestrator | 2026-01-28 00:50:07 | INFO  | Task e4015124-8db1-40ae-9c1a-a4b3e4c057c8 is in state SUCCESS 2026-01-28 00:50:07.753321 | orchestrator | 2026-01-28 00:50:07 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:50:07.755447 | orchestrator | 2026-01-28 00:50:07 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:50:07.755925 | orchestrator | 2026-01-28 00:50:07 | INFO  | 
Wait 1 second(s) until the next check 2026-01-28 00:50:10.827301 | orchestrator | 2026-01-28 00:50:10 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:50:10.827419 | orchestrator | 2026-01-28 00:50:10 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:50:10.827442 | orchestrator | 2026-01-28 00:50:10 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:50:10.827461 | orchestrator | 2026-01-28 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:50:13.867151 | orchestrator | 2026-01-28 00:50:13 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:50:13.874784 | orchestrator | 2026-01-28 00:50:13 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:50:13.874858 | orchestrator | 2026-01-28 00:50:13 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:50:13.874882 | orchestrator | 2026-01-28 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:50:16.903220 | orchestrator | 2026-01-28 00:50:16 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:50:16.905624 | orchestrator | 2026-01-28 00:50:16 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:50:16.906171 | orchestrator | 2026-01-28 00:50:16 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:50:16.906208 | orchestrator | 2026-01-28 00:50:16 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:50:19.942080 | orchestrator | 2026-01-28 00:50:19 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:50:19.942254 | orchestrator | 2026-01-28 00:50:19 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:50:19.942272 | orchestrator | 2026-01-28 00:50:19 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state 
STARTED 2026-01-28 00:50:19.942286 | orchestrator | 2026-01-28 00:50:19 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:50:22.981024 | orchestrator | 2026-01-28 00:50:22 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:50:22.981218 | orchestrator | 2026-01-28 00:50:22 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:50:22.981247 | orchestrator | 2026-01-28 00:50:22 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:50:22.981282 | orchestrator | 2026-01-28 00:50:22 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:50:26.022606 | orchestrator | 2026-01-28 00:50:26 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state STARTED 2026-01-28 00:50:26.029313 | orchestrator | 2026-01-28 00:50:26 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:50:26.030879 | orchestrator | 2026-01-28 00:50:26 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:50:26.030923 | orchestrator | 2026-01-28 00:50:26 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:50:29.068158 | orchestrator | 2026-01-28 00:50:29 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:50:29.085469 | orchestrator | 2026-01-28 00:50:29 | INFO  | Task e6f1d272-601d-40fb-b1c7-fa3cbf957a17 is in state SUCCESS 2026-01-28 00:50:29.087694 | orchestrator | 2026-01-28 00:50:29.087768 | orchestrator | 2026-01-28 00:50:29.087791 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-01-28 00:50:29.087813 | orchestrator | 2026-01-28 00:50:29.087834 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-01-28 00:50:29.087855 | orchestrator | Wednesday 28 January 2026 00:48:46 +0000 (0:00:00.738) 0:00:00.738 ***** 2026-01-28 00:50:29.087876 | orchestrator | ok: [testbed-manager] 
2026-01-28 00:50:29.087897 | orchestrator | 2026-01-28 00:50:29.087918 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-01-28 00:50:29.087938 | orchestrator | Wednesday 28 January 2026 00:48:48 +0000 (0:00:01.723) 0:00:02.461 ***** 2026-01-28 00:50:29.087959 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-01-28 00:50:29.087979 | orchestrator | 2026-01-28 00:50:29.087999 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-01-28 00:50:29.088019 | orchestrator | Wednesday 28 January 2026 00:48:48 +0000 (0:00:00.493) 0:00:02.955 ***** 2026-01-28 00:50:29.088038 | orchestrator | changed: [testbed-manager] 2026-01-28 00:50:29.088058 | orchestrator | 2026-01-28 00:50:29.088078 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-01-28 00:50:29.088099 | orchestrator | Wednesday 28 January 2026 00:48:50 +0000 (0:00:01.918) 0:00:04.873 ***** 2026-01-28 00:50:29.088149 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2026-01-28 00:50:29.088169 | orchestrator | ok: [testbed-manager] 2026-01-28 00:50:29.088190 | orchestrator | 2026-01-28 00:50:29.088211 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-01-28 00:50:29.088231 | orchestrator | Wednesday 28 January 2026 00:49:53 +0000 (0:01:02.901) 0:01:07.775 ***** 2026-01-28 00:50:29.088252 | orchestrator | changed: [testbed-manager] 2026-01-28 00:50:29.088272 | orchestrator | 2026-01-28 00:50:29.088756 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:50:29.088787 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:50:29.088808 | orchestrator | 2026-01-28 00:50:29.088829 | orchestrator | 2026-01-28 00:50:29.088850 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:50:29.088869 | orchestrator | Wednesday 28 January 2026 00:50:05 +0000 (0:00:11.891) 0:01:19.666 ***** 2026-01-28 00:50:29.088889 | orchestrator | =============================================================================== 2026-01-28 00:50:29.088906 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 62.90s 2026-01-28 00:50:29.088924 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 11.89s 2026-01-28 00:50:29.088979 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.92s 2026-01-28 00:50:29.089000 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.72s 2026-01-28 00:50:29.089019 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.49s 2026-01-28 00:50:29.089039 | orchestrator | 2026-01-28 00:50:29.089057 | orchestrator | 2026-01-28 00:50:29.089077 | orchestrator | PLAY [Apply role common] 
******************************************************* 2026-01-28 00:50:29.089096 | orchestrator | 2026-01-28 00:50:29.089144 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-28 00:50:29.089164 | orchestrator | Wednesday 28 January 2026 00:48:16 +0000 (0:00:00.253) 0:00:00.253 ***** 2026-01-28 00:50:29.089184 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:50:29.089751 | orchestrator | 2026-01-28 00:50:29.089782 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-01-28 00:50:29.089801 | orchestrator | Wednesday 28 January 2026 00:48:17 +0000 (0:00:01.244) 0:00:01.498 ***** 2026-01-28 00:50:29.089820 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-28 00:50:29.089839 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-28 00:50:29.089857 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-28 00:50:29.089874 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-28 00:50:29.089886 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-28 00:50:29.089897 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-28 00:50:29.089908 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-28 00:50:29.089920 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-28 00:50:29.089931 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-28 00:50:29.089940 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 
'fluentd'}, 'fluentd']) 2026-01-28 00:50:29.089950 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-01-28 00:50:29.089959 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-28 00:50:29.089969 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-28 00:50:29.089978 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-28 00:50:29.089988 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-28 00:50:29.089998 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-28 00:50:29.090232 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-28 00:50:29.090285 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-28 00:50:29.090296 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-28 00:50:29.090306 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-01-28 00:50:29.090316 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-01-28 00:50:29.090326 | orchestrator | 2026-01-28 00:50:29.090336 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-01-28 00:50:29.090346 | orchestrator | Wednesday 28 January 2026 00:48:22 +0000 (0:00:04.717) 0:00:06.215 ***** 2026-01-28 00:50:29.090356 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:50:29.090383 | orchestrator | 2026-01-28 00:50:29.090394 | 
orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-01-28 00:50:29.090403 | orchestrator | Wednesday 28 January 2026 00:48:23 +0000 (0:00:01.442) 0:00:07.658 ***** 2026-01-28 00:50:29.090428 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.090443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.090454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.090465 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.090475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.090552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.090575 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090613 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.090624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090644 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090687 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-28 00:50:29.090706 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.090790 | orchestrator | 2026-01-28 00:50:29.090798 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-01-28 00:50:29.090825 | orchestrator | Wednesday 28 January 2026 00:48:28 +0000 (0:00:04.339) 0:00:11.998 ***** 2026-01-28 00:50:29.090841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.090849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.090862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.090871 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.090879 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.090888 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.090896 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:50:29.090905 | orchestrator | 
skipping: [testbed-manager] 2026-01-28 00:50:29.090913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.090952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.090961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.090974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.090982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.090991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.090999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}})  2026-01-28 00:50:29.091007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091029 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:50:29.091042 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:50:29.091050 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:50:29.091058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.091067 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091087 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:50:29.091095 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.091123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091145 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:50:29.091153 | orchestrator | 2026-01-28 00:50:29.091161 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-01-28 00:50:29.091169 | orchestrator | Wednesday 28 January 2026 00:48:29 +0000 (0:00:01.624) 0:00:13.623 ***** 2026-01-28 00:50:29.091177 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.091194 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091203 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091211 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:50:29.091223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.091231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091248 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:50:29.091256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.091269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-01-28 00:50:29.091282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.091302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.091327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091349 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:50:29.091357 | 
orchestrator | skipping: [testbed-node-1] 2026-01-28 00:50:29.091365 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:50:29.091373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.091386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091403 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:50:29.091415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-01-28 00:50:29.091515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.091542 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:50:29.091550 | orchestrator | 2026-01-28 00:50:29.091558 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-01-28 00:50:29.091566 | orchestrator | Wednesday 28 January 2026 00:48:32 +0000 (0:00:02.700) 0:00:16.323 ***** 2026-01-28 00:50:29.091574 | orchestrator | 
skipping: [testbed-manager] 2026-01-28 00:50:29.091649 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:50:29.091658 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:50:29.091666 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:50:29.091673 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:50:29.091681 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:50:29.091689 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:50:29.091697 | orchestrator | 2026-01-28 00:50:29.091705 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-01-28 00:50:29.091713 | orchestrator | Wednesday 28 January 2026 00:48:33 +0000 (0:00:01.112) 0:00:17.435 ***** 2026-01-28 00:50:29.091721 | orchestrator | skipping: [testbed-manager] 2026-01-28 00:50:29.091729 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:50:29.091737 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:50:29.091744 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:50:29.091752 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:50:29.091760 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:50:29.091768 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:50:29.091776 | orchestrator | 2026-01-28 00:50:29.091796 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-01-28 00:50:29.091804 | orchestrator | Wednesday 28 January 2026 00:48:34 +0000 (0:00:01.327) 0:00:18.763 ***** 2026-01-28 00:50:29.091835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.091845 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.091853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.091862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.091880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.091888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.091897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.091910 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.091924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.091933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.091945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 
00:50:29.091959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.091967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.091975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.091984 
| orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.091997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092006 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092058 | orchestrator | 2026-01-28 00:50:29.092066 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-01-28 00:50:29.092074 | orchestrator | Wednesday 28 January 2026 00:48:42 +0000 (0:00:07.676) 0:00:26.439 ***** 2026-01-28 00:50:29.092082 | orchestrator | [WARNING]: Skipped 2026-01-28 00:50:29.092091 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-01-28 00:50:29.092099 | orchestrator | to this access issue: 
2026-01-28 00:50:29.092135 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-01-28 00:50:29.092143 | orchestrator | directory 2026-01-28 00:50:29.092151 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-28 00:50:29.092159 | orchestrator | 2026-01-28 00:50:29.092167 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-01-28 00:50:29.092175 | orchestrator | Wednesday 28 January 2026 00:48:44 +0000 (0:00:01.679) 0:00:28.119 ***** 2026-01-28 00:50:29.092183 | orchestrator | [WARNING]: Skipped 2026-01-28 00:50:29.092191 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-01-28 00:50:29.092199 | orchestrator | to this access issue: 2026-01-28 00:50:29.092207 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-01-28 00:50:29.092215 | orchestrator | directory 2026-01-28 00:50:29.092223 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-28 00:50:29.092231 | orchestrator | 2026-01-28 00:50:29.092239 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-01-28 00:50:29.092247 | orchestrator | Wednesday 28 January 2026 00:48:45 +0000 (0:00:01.577) 0:00:29.696 ***** 2026-01-28 00:50:29.092255 | orchestrator | [WARNING]: Skipped 2026-01-28 00:50:29.092263 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-01-28 00:50:29.092271 | orchestrator | to this access issue: 2026-01-28 00:50:29.092278 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-01-28 00:50:29.092286 | orchestrator | directory 2026-01-28 00:50:29.092294 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-28 00:50:29.092302 | orchestrator | 2026-01-28 00:50:29.092315 | orchestrator | TASK [common : Find custom fluentd output config files] 
************************ 2026-01-28 00:50:29.092323 | orchestrator | Wednesday 28 January 2026 00:48:47 +0000 (0:00:01.712) 0:00:31.409 ***** 2026-01-28 00:50:29.092331 | orchestrator | [WARNING]: Skipped 2026-01-28 00:50:29.092339 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-01-28 00:50:29.092353 | orchestrator | to this access issue: 2026-01-28 00:50:29.092361 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-01-28 00:50:29.092369 | orchestrator | directory 2026-01-28 00:50:29.092377 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-28 00:50:29.092385 | orchestrator | 2026-01-28 00:50:29.092393 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-01-28 00:50:29.092401 | orchestrator | Wednesday 28 January 2026 00:48:48 +0000 (0:00:01.310) 0:00:32.719 ***** 2026-01-28 00:50:29.092409 | orchestrator | changed: [testbed-manager] 2026-01-28 00:50:29.092417 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:50:29.092425 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:50:29.092433 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:50:29.092441 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:50:29.092449 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:50:29.092456 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:50:29.092464 | orchestrator | 2026-01-28 00:50:29.092472 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-01-28 00:50:29.092480 | orchestrator | Wednesday 28 January 2026 00:48:53 +0000 (0:00:04.239) 0:00:36.959 ***** 2026-01-28 00:50:29.092489 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-28 00:50:29.092497 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 
2026-01-28 00:50:29.092508 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-28 00:50:29.092517 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-28 00:50:29.092525 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-28 00:50:29.092533 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-28 00:50:29.092593 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-01-28 00:50:29.092602 | orchestrator | 2026-01-28 00:50:29.092610 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-01-28 00:50:29.092618 | orchestrator | Wednesday 28 January 2026 00:48:56 +0000 (0:00:03.305) 0:00:40.264 ***** 2026-01-28 00:50:29.092626 | orchestrator | changed: [testbed-manager] 2026-01-28 00:50:29.092634 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:50:29.092642 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:50:29.092650 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:50:29.092658 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:50:29.092666 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:50:29.092674 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:50:29.092682 | orchestrator | 2026-01-28 00:50:29.092690 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-01-28 00:50:29.092698 | orchestrator | Wednesday 28 January 2026 00:49:00 +0000 (0:00:03.619) 0:00:43.884 ***** 2026-01-28 00:50:29.092706 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.092715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.092731 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092751 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.092760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.092772 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.092781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 
00:50:29.092789 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.092797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.092810 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092823 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.092832 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.092840 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092852 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.092861 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.092869 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.092882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:50:29.092890 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092906 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092914 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092923 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.092931 | orchestrator | 2026-01-28 00:50:29.092943 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-01-28 00:50:29.092951 | orchestrator | Wednesday 28 January 2026 00:49:03 +0000 (0:00:03.466) 0:00:47.351 ***** 2026-01-28 
00:50:29.092959 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-28 00:50:29.092967 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-28 00:50:29.092975 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-28 00:50:29.092983 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-28 00:50:29.092991 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-28 00:50:29.092998 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-28 00:50:29.093006 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-01-28 00:50:29.093014 | orchestrator | 2026-01-28 00:50:29.093022 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-01-28 00:50:29.093030 | orchestrator | Wednesday 28 January 2026 00:49:05 +0000 (0:00:01.885) 0:00:49.236 ***** 2026-01-28 00:50:29.093043 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-28 00:50:29.093051 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-28 00:50:29.093059 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-28 00:50:29.093067 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-28 00:50:29.093075 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-28 00:50:29.093083 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-28 00:50:29.093091 | 
orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-01-28 00:50:29.093099 | orchestrator | 2026-01-28 00:50:29.093124 | orchestrator | TASK [common : Check common containers] **************************************** 2026-01-28 00:50:29.093132 | orchestrator | Wednesday 28 January 2026 00:49:08 +0000 (0:00:02.866) 0:00:52.102 ***** 2026-01-28 00:50:29.093141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.093149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.093163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.093172 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.093187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.093195 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.093209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-01-28 00:50:29.093217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093287 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 
00:50:29.093392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093411 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093426 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:50:29.093443 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:50:29.093452 | orchestrator |
2026-01-28 00:50:29.093461 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-01-28 00:50:29.093471 | orchestrator | Wednesday 28 January 2026 00:49:11 +0000 (0:00:03.401) 0:00:55.504 *****
2026-01-28 00:50:29.093480 | orchestrator | changed: [testbed-manager]
2026-01-28 00:50:29.093489 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:50:29.093498 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:50:29.093508 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:50:29.093517 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:50:29.093526 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:50:29.093535 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:50:29.093544 | orchestrator |
2026-01-28 00:50:29.093551 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-01-28 00:50:29.093559 | orchestrator | Wednesday 28 January 2026 00:49:13 +0000 (0:00:02.226) 0:00:57.731 *****
2026-01-28 00:50:29.093567 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:50:29.093575 | orchestrator | changed: [testbed-manager]
2026-01-28 00:50:29.093583 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:50:29.093590 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:50:29.093598 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:50:29.093606 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:50:29.093614 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:50:29.093622 | orchestrator |
2026-01-28 00:50:29.093630 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-28 00:50:29.093638 | orchestrator | Wednesday 28 January 2026 00:49:15 +0000 (0:00:01.639) 0:00:59.370 *****
2026-01-28 00:50:29.093645 | orchestrator |
2026-01-28 00:50:29.093653 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-28 00:50:29.093661 | orchestrator | Wednesday 28 January 2026 00:49:15 +0000 (0:00:00.062) 0:00:59.433 *****
2026-01-28 00:50:29.093669 | orchestrator |
2026-01-28 00:50:29.093677 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-28 00:50:29.093684 | orchestrator | Wednesday 28 January 2026 00:49:15 +0000 (0:00:00.059) 0:00:59.492 *****
2026-01-28 00:50:29.093692 | orchestrator |
2026-01-28 00:50:29.093700 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-28 00:50:29.093708 | orchestrator | Wednesday 28 January 2026 00:49:15 +0000 (0:00:00.208) 0:00:59.700 *****
2026-01-28 00:50:29.093716 | orchestrator |
2026-01-28 00:50:29.093724 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-28 00:50:29.093731 | orchestrator | Wednesday 28 January 2026 00:49:15 +0000 (0:00:00.083) 0:00:59.784 *****
2026-01-28 00:50:29.093739 | orchestrator |
2026-01-28 00:50:29.093747 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-28 00:50:29.093755 | orchestrator | Wednesday 28 January 2026 00:49:16 +0000 (0:00:00.094) 0:00:59.878 *****
2026-01-28 00:50:29.093763 | orchestrator |
2026-01-28 00:50:29.093771 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-01-28 00:50:29.093778 | orchestrator | Wednesday 28 January 2026 00:49:16 +0000 (0:00:00.111) 0:00:59.989 *****
2026-01-28 00:50:29.093786 | orchestrator |
2026-01-28 00:50:29.093794 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-01-28 00:50:29.093802 | orchestrator | Wednesday 28 January 2026 00:49:16 +0000 (0:00:00.098) 0:01:00.087 *****
2026-01-28 00:50:29.093818 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:50:29.093826 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:50:29.093834 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:50:29.093842 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:50:29.093850 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:50:29.093858 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:50:29.093865 | orchestrator | changed: [testbed-manager]
2026-01-28 00:50:29.093873 | orchestrator |
2026-01-28 00:50:29.093881 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-01-28 00:50:29.093889 | orchestrator | Wednesday 28 January 2026 00:49:43 +0000 (0:00:26.963) 0:01:27.051 *****
2026-01-28 00:50:29.093897 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:50:29.093905 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:50:29.093912 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:50:29.093920 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:50:29.093928 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:50:29.093936 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:50:29.093943 | orchestrator | changed: [testbed-manager]
2026-01-28 00:50:29.093951 | orchestrator |
2026-01-28 00:50:29.093959 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-01-28 00:50:29.093967 | orchestrator | Wednesday 28 January 2026 00:50:16 +0000 (0:00:33.287) 0:02:00.338 *****
2026-01-28 00:50:29.093975 | orchestrator | ok: [testbed-manager]
2026-01-28 00:50:29.093983 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:50:29.093991 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:50:29.093999 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:50:29.094006 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:50:29.094039 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:50:29.094049 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:50:29.094057 | orchestrator |
2026-01-28 00:50:29.094065 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-01-28 00:50:29.094073 | orchestrator | Wednesday 28 January 2026 00:50:18 +0000 (0:00:02.115) 0:02:02.454 *****
2026-01-28 00:50:29.094085 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:50:29.094093 | orchestrator | changed: [testbed-manager]
2026-01-28 00:50:29.094144 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:50:29.094154 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:50:29.094162 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:50:29.094259 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:50:29.094270 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:50:29.094302 | orchestrator |
2026-01-28 00:50:29.094310 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:50:29.094318 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-28 00:50:29.094327 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-28 00:50:29.094335 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-28 00:50:29.094343 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-28 00:50:29.094351 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-28 00:50:29.094359 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-28 00:50:29.094367 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-01-28 00:50:29.094375 | orchestrator |
2026-01-28 00:50:29.094383 | orchestrator |
2026-01-28 00:50:29.094400 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:50:29.094408 | orchestrator | Wednesday 28 January 2026 00:50:27 +0000 (0:00:09.040) 0:02:11.494 *****
2026-01-28 00:50:29.094416 | orchestrator | ===============================================================================
2026-01-28 00:50:29.094424 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 33.29s
2026-01-28 00:50:29.094431 | orchestrator | common : Restart fluentd container ------------------------------------- 26.96s
2026-01-28 00:50:29.094439 | orchestrator | common : Restart cron container ----------------------------------------- 9.04s
2026-01-28 00:50:29.094447 | orchestrator | common : Copying over config.json files for services -------------------- 7.68s
2026-01-28 00:50:29.094455 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.72s
2026-01-28 00:50:29.094463 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.34s
2026-01-28 00:50:29.094471 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.24s
2026-01-28 00:50:29.094479 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.62s
2026-01-28 00:50:29.094487 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.47s
2026-01-28 00:50:29.094495 | orchestrator | common : Check common containers ---------------------------------------- 3.40s
2026-01-28 00:50:29.094503 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.31s
2026-01-28 00:50:29.094511 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.87s
2026-01-28 00:50:29.094519 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.70s
2026-01-28 00:50:29.094527 | orchestrator | common : Creating log volume -------------------------------------------- 2.23s
2026-01-28 00:50:29.094541 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.12s
2026-01-28 00:50:29.094550 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.88s
2026-01-28 00:50:29.094557 | orchestrator | common : Find custom fluentd format config files ------------------------ 1.71s
2026-01-28 00:50:29.094565 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.68s
2026-01-28 00:50:29.094573 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.64s
2026-01-28 00:50:29.094581 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.62s
2026-01-28 00:50:29.094589 | orchestrator | 2026-01-28 00:50:29 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:50:29.094598 | orchestrator | 2026-01-28 00:50:29 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:50:29.094605 | orchestrator | 2026-01-28 00:50:29 | INFO  | Task ae115d55-62b2-49bf-80d3-91d597f012e9 is in state STARTED
2026-01-28 00:50:29.094613 | orchestrator | 2026-01-28 00:50:29 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:50:29.094621 | orchestrator | 2026-01-28 00:50:29 | INFO  | Task 95fc787d-4365-4e4a-bb51-9d18e90d3174 is in state STARTED
2026-01-28 00:50:29.094629 | orchestrator | 2026-01-28 00:50:29 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:50:32.129178 | orchestrator | 2026-01-28 00:50:32 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED
2026-01-28 00:50:32.129288 |
orchestrator | 2026-01-28 00:50:32 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:50:32.129852 | orchestrator | 2026-01-28 00:50:32 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:50:32.130595 | orchestrator | 2026-01-28 00:50:32 | INFO  | Task ae115d55-62b2-49bf-80d3-91d597f012e9 is in state STARTED
2026-01-28 00:50:32.131036 | orchestrator | 2026-01-28 00:50:32 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:50:32.131935 | orchestrator | 2026-01-28 00:50:32 | INFO  | Task 95fc787d-4365-4e4a-bb51-9d18e90d3174 is in state STARTED
2026-01-28 00:50:32.132022 | orchestrator | 2026-01-28 00:50:32 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:50:35.161807 | orchestrator | 2026-01-28 00:50:35 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED
2026-01-28 00:50:35.161877 | orchestrator | 2026-01-28 00:50:35 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:50:35.162496 | orchestrator | 2026-01-28 00:50:35 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:50:35.162909 | orchestrator | 2026-01-28 00:50:35 | INFO  | Task ae115d55-62b2-49bf-80d3-91d597f012e9 is in state STARTED
2026-01-28 00:50:35.163573 | orchestrator | 2026-01-28 00:50:35 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:50:35.164287 | orchestrator | 2026-01-28 00:50:35 | INFO  | Task 95fc787d-4365-4e4a-bb51-9d18e90d3174 is in state STARTED
2026-01-28 00:50:35.164318 | orchestrator | 2026-01-28 00:50:35 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:50:38.190658 | orchestrator | 2026-01-28 00:50:38 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED
2026-01-28 00:50:38.191385 | orchestrator | 2026-01-28 00:50:38 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:50:38.191683 | orchestrator | 2026-01-28 00:50:38 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:50:38.192502 | orchestrator | 2026-01-28 00:50:38 | INFO  | Task ae115d55-62b2-49bf-80d3-91d597f012e9 is in state STARTED
2026-01-28 00:50:38.193382 | orchestrator | 2026-01-28 00:50:38 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:50:38.194933 | orchestrator | 2026-01-28 00:50:38 | INFO  | Task 95fc787d-4365-4e4a-bb51-9d18e90d3174 is in state STARTED
2026-01-28 00:50:38.194986 | orchestrator | 2026-01-28 00:50:38 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:50:41.214665 | orchestrator | 2026-01-28 00:50:41 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED
2026-01-28 00:50:41.215197 | orchestrator | 2026-01-28 00:50:41 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:50:41.216790 | orchestrator | 2026-01-28 00:50:41 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:50:41.217500 | orchestrator | 2026-01-28 00:50:41 | INFO  | Task ae115d55-62b2-49bf-80d3-91d597f012e9 is in state STARTED
2026-01-28 00:50:41.218147 | orchestrator | 2026-01-28 00:50:41 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:50:41.218874 | orchestrator | 2026-01-28 00:50:41 | INFO  | Task 95fc787d-4365-4e4a-bb51-9d18e90d3174 is in state STARTED
2026-01-28 00:50:41.218957 | orchestrator | 2026-01-28 00:50:41 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:50:44.250795 | orchestrator | 2026-01-28 00:50:44 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED
2026-01-28 00:50:44.250986 | orchestrator | 2026-01-28 00:50:44 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:50:44.251670 | orchestrator | 2026-01-28 00:50:44 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:50:44.252586 | orchestrator | 2026-01-28 00:50:44 | INFO  | Task ae115d55-62b2-49bf-80d3-91d597f012e9 is in state STARTED
2026-01-28 00:50:44.254286 | orchestrator | 2026-01-28 00:50:44 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:50:44.254897 | orchestrator | 2026-01-28 00:50:44 | INFO  | Task 95fc787d-4365-4e4a-bb51-9d18e90d3174 is in state STARTED
2026-01-28 00:50:44.254920 | orchestrator | 2026-01-28 00:50:44 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:50:47.288677 | orchestrator | 2026-01-28 00:50:47 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED
2026-01-28 00:50:47.290820 | orchestrator | 2026-01-28 00:50:47 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:50:47.291151 | orchestrator | 2026-01-28 00:50:47 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:50:47.291755 | orchestrator | 2026-01-28 00:50:47 | INFO  | Task ae115d55-62b2-49bf-80d3-91d597f012e9 is in state SUCCESS
2026-01-28 00:50:47.292613 | orchestrator | 2026-01-28 00:50:47 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:50:47.293244 | orchestrator | 2026-01-28 00:50:47 | INFO  | Task 95fc787d-4365-4e4a-bb51-9d18e90d3174 is in state STARTED
2026-01-28 00:50:47.293987 | orchestrator | 2026-01-28 00:50:47 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED
2026-01-28 00:50:47.294008 | orchestrator | 2026-01-28 00:50:47 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:50:50.337865 | orchestrator | 2026-01-28 00:50:50 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED
2026-01-28 00:50:50.339629 | orchestrator | 2026-01-28 00:50:50 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:50:50.341530 | orchestrator | 2026-01-28 00:50:50 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:50:50.341853 | orchestrator | 2026-01-28 00:50:50 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:50:50.342608 | orchestrator | 2026-01-28 00:50:50 | INFO  | Task 95fc787d-4365-4e4a-bb51-9d18e90d3174 is in state STARTED
2026-01-28 00:50:50.343376 | orchestrator | 2026-01-28 00:50:50 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED
2026-01-28 00:50:50.343452 | orchestrator | 2026-01-28 00:50:50 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:50:53.476590 | orchestrator | 2026-01-28 00:50:53 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED
2026-01-28 00:50:53.477414 | orchestrator | 2026-01-28 00:50:53 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:50:53.479153 | orchestrator | 2026-01-28 00:50:53 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:50:53.479961 | orchestrator | 2026-01-28 00:50:53 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:50:53.481034 | orchestrator | 2026-01-28 00:50:53 | INFO  | Task 95fc787d-4365-4e4a-bb51-9d18e90d3174 is in state STARTED
2026-01-28 00:50:53.481761 | orchestrator | 2026-01-28 00:50:53 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED
2026-01-28 00:50:53.481789 | orchestrator | 2026-01-28 00:50:53 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:50:56.535659 | orchestrator | 2026-01-28 00:50:56 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED
2026-01-28 00:50:56.536037 | orchestrator | 2026-01-28 00:50:56 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:50:56.536066 | orchestrator | 2026-01-28 00:50:56 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:50:56.536838 | orchestrator | 2026-01-28 00:50:56 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED
2026-01-28 00:50:56.537432 |
orchestrator | 2026-01-28 00:50:56 | INFO  | Task 95fc787d-4365-4e4a-bb51-9d18e90d3174 is in state SUCCESS
2026-01-28 00:50:56.539782 | orchestrator |
2026-01-28 00:50:56.539849 | orchestrator |
2026-01-28 00:50:56.539869 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 00:50:56.539890 | orchestrator |
2026-01-28 00:50:56.539902 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 00:50:56.539931 | orchestrator | Wednesday 28 January 2026 00:50:33 +0000 (0:00:00.367) 0:00:00.367 *****
2026-01-28 00:50:56.539944 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:50:56.539957 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:50:56.539979 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:50:56.539990 | orchestrator |
2026-01-28 00:50:56.540001 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 00:50:56.540012 | orchestrator | Wednesday 28 January 2026 00:50:33 +0000 (0:00:00.323) 0:00:00.691 *****
2026-01-28 00:50:56.540023 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-01-28 00:50:56.540034 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-01-28 00:50:56.540045 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-01-28 00:50:56.540055 | orchestrator |
2026-01-28 00:50:56.540066 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-01-28 00:50:56.540077 | orchestrator |
2026-01-28 00:50:56.540088 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-01-28 00:50:56.540135 | orchestrator | Wednesday 28 January 2026 00:50:34 +0000 (0:00:00.566) 0:00:01.257 *****
2026-01-28 00:50:56.540156 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:50:56.540169 | orchestrator |
2026-01-28 00:50:56.540179 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-01-28 00:50:56.540190 | orchestrator | Wednesday 28 January 2026 00:50:35 +0000 (0:00:00.754) 0:00:02.012 *****
2026-01-28 00:50:56.540201 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-28 00:50:56.540212 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-28 00:50:56.540223 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-28 00:50:56.540233 | orchestrator |
2026-01-28 00:50:56.540244 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-01-28 00:50:56.540255 | orchestrator | Wednesday 28 January 2026 00:50:36 +0000 (0:00:00.758) 0:00:02.770 *****
2026-01-28 00:50:56.540265 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-01-28 00:50:56.540276 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-01-28 00:50:56.540287 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-01-28 00:50:56.540298 | orchestrator |
2026-01-28 00:50:56.540308 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-01-28 00:50:56.540319 | orchestrator | Wednesday 28 January 2026 00:50:38 +0000 (0:00:02.041) 0:00:04.811 *****
2026-01-28 00:50:56.540333 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:50:56.540345 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:50:56.540357 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:50:56.540369 | orchestrator |
2026-01-28 00:50:56.540381 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-01-28 00:50:56.540393 | orchestrator | Wednesday 28 January 2026 00:50:39 +0000 (0:00:01.584) 0:00:06.396 *****
2026-01-28 00:50:56.540406 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:50:56.540418 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:50:56.540430 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:50:56.540443 | orchestrator |
2026-01-28 00:50:56.540454 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:50:56.540465 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:50:56.540478 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:50:56.540505 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 00:50:56.540516 | orchestrator |
2026-01-28 00:50:56.540526 | orchestrator |
2026-01-28 00:50:56.540537 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:50:56.540548 | orchestrator | Wednesday 28 January 2026 00:50:43 +0000 (0:00:04.156) 0:00:10.553 *****
2026-01-28 00:50:56.540559 | orchestrator | ===============================================================================
2026-01-28 00:50:56.540569 | orchestrator | memcached : Restart memcached container --------------------------------- 4.16s
2026-01-28 00:50:56.540580 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.04s
2026-01-28 00:50:56.540591 | orchestrator | memcached : Check memcached container ----------------------------------- 1.58s
2026-01-28 00:50:56.540601 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.76s
2026-01-28 00:50:56.540612 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.75s
2026-01-28 00:50:56.540623 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s
2026-01-28 00:50:56.540634 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-01-28 00:50:56.540644 | orchestrator |
2026-01-28 00:50:56.540655 | orchestrator |
2026-01-28 00:50:56.540665 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 00:50:56.540676 | orchestrator |
2026-01-28 00:50:56.540687 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 00:50:56.540697 | orchestrator | Wednesday 28 January 2026 00:50:33 +0000 (0:00:00.345) 0:00:00.345 *****
2026-01-28 00:50:56.540708 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:50:56.540719 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:50:56.540729 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:50:56.540740 | orchestrator |
2026-01-28 00:50:56.540751 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 00:50:56.540776 | orchestrator | Wednesday 28 January 2026 00:50:33 +0000 (0:00:00.493) 0:00:00.838 *****
2026-01-28 00:50:56.540788 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-01-28 00:50:56.540799 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-01-28 00:50:56.540810 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-01-28 00:50:56.540820 | orchestrator |
2026-01-28 00:50:56.540831 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-01-28 00:50:56.540842 | orchestrator |
2026-01-28 00:50:56.540852 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-01-28 00:50:56.540863 | orchestrator | Wednesday 28 January 2026 00:50:34 +0000 (0:00:00.654) 0:00:01.493 *****
2026-01-28 00:50:56.540874 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:50:56.540885 | orchestrator |
2026-01-28 00:50:56.540896 | orchestrator | TASK [redis : Ensuring config directories exist]
*******************************
2026-01-28 00:50:56.540906 | orchestrator | Wednesday 28 January 2026 00:50:34 +0000 (0:00:00.527) 0:00:02.020 *****
2026-01-28 00:50:56.540925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-28 00:50:56.540943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-28 00:50:56.540961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-28 00:50:56.540973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-28 00:50:56.540985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541017 | orchestrator |
2026-01-28 00:50:56.541028 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-01-28 00:50:56.541039 | orchestrator | Wednesday 28 January 2026 00:50:36 +0000 (0:00:01.553) 0:00:03.574 *****
2026-01-28 00:50:56.541055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541158 | orchestrator |
2026-01-28 00:50:56.541169 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-01-28 00:50:56.541180 | orchestrator | Wednesday 28 January 2026 00:50:38 +0000 (0:00:02.688) 0:00:06.262 *****
2026-01-28 00:50:56.541196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541279 | orchestrator |
2026-01-28 00:50:56.541290 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-01-28 00:50:56.541301 | orchestrator | Wednesday 28 January 2026 00:50:41 +0000 (0:00:02.309) 0:00:08.571 *****
2026-01-28 00:50:56.541312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-01-28 00:50:56.541334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes':
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-28 00:50:56.541346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-01-28 00:50:56.541357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-28 00:50:56.541368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-28 00:50:56.541385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-01-28 00:50:56.541397 | orchestrator | 2026-01-28 00:50:56.541407 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-28 00:50:56.541418 | orchestrator | Wednesday 28 January 2026 00:50:42 +0000 (0:00:01.560) 0:00:10.131 ***** 2026-01-28 00:50:56.541429 | orchestrator | 2026-01-28 00:50:56.541440 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-28 00:50:56.541457 | orchestrator | Wednesday 28 January 2026 00:50:43 +0000 (0:00:00.171) 0:00:10.303 ***** 2026-01-28 00:50:56.541468 | orchestrator | 2026-01-28 00:50:56.541479 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-01-28 00:50:56.541489 | orchestrator | Wednesday 28 January 2026 
00:50:43 +0000 (0:00:00.090) 0:00:10.394 ***** 2026-01-28 00:50:56.541500 | orchestrator | 2026-01-28 00:50:56.541511 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-01-28 00:50:56.541522 | orchestrator | Wednesday 28 January 2026 00:50:43 +0000 (0:00:00.118) 0:00:10.512 ***** 2026-01-28 00:50:56.541532 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:50:56.541543 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:50:56.541554 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:50:56.541565 | orchestrator | 2026-01-28 00:50:56.541575 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-01-28 00:50:56.541586 | orchestrator | Wednesday 28 January 2026 00:50:47 +0000 (0:00:03.822) 0:00:14.335 ***** 2026-01-28 00:50:56.541597 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:50:56.541608 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:50:56.541618 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:50:56.541629 | orchestrator | 2026-01-28 00:50:56.541640 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:50:56.541651 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:50:56.541662 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:50:56.541673 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:50:56.541684 | orchestrator | 2026-01-28 00:50:56.541695 | orchestrator | 2026-01-28 00:50:56.541705 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:50:56.541716 | orchestrator | Wednesday 28 January 2026 00:50:55 +0000 (0:00:08.309) 0:00:22.645 ***** 2026-01-28 00:50:56.541726 | orchestrator | 
=============================================================================== 2026-01-28 00:50:56.541737 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.31s 2026-01-28 00:50:56.541748 | orchestrator | redis : Restart redis container ----------------------------------------- 3.82s 2026-01-28 00:50:56.541758 | orchestrator | redis : Copying over default config.json files -------------------------- 2.69s 2026-01-28 00:50:56.541769 | orchestrator | redis : Copying over redis config files --------------------------------- 2.31s 2026-01-28 00:50:56.541780 | orchestrator | redis : Check redis containers ------------------------------------------ 1.56s 2026-01-28 00:50:56.541791 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.55s 2026-01-28 00:50:56.541802 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2026-01-28 00:50:56.541812 | orchestrator | redis : include_tasks --------------------------------------------------- 0.53s 2026-01-28 00:50:56.541823 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2026-01-28 00:50:56.541834 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.38s 2026-01-28 00:50:56.541851 | orchestrator | 2026-01-28 00:50:56 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:50:56.541862 | orchestrator | 2026-01-28 00:50:56 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:50:59.776326 | orchestrator | 2026-01-28 00:50:59 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:50:59.776438 | orchestrator | 2026-01-28 00:50:59 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:50:59.777670 | orchestrator | 2026-01-28 00:50:59 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 
00:50:59.778895 | orchestrator | 2026-01-28 00:50:59 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:50:59.780363 | orchestrator | 2026-01-28 00:50:59 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:50:59.780389 | orchestrator | 2026-01-28 00:50:59 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:02.819520 | orchestrator | 2026-01-28 00:51:02 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:02.820364 | orchestrator | 2026-01-28 00:51:02 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:02.821304 | orchestrator | 2026-01-28 00:51:02 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:02.822535 | orchestrator | 2026-01-28 00:51:02 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:02.823756 | orchestrator | 2026-01-28 00:51:02 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:02.823861 | orchestrator | 2026-01-28 00:51:02 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:05.870933 | orchestrator | 2026-01-28 00:51:05 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:05.871777 | orchestrator | 2026-01-28 00:51:05 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:05.872645 | orchestrator | 2026-01-28 00:51:05 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:05.873749 | orchestrator | 2026-01-28 00:51:05 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:05.875283 | orchestrator | 2026-01-28 00:51:05 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:05.875314 | orchestrator | 2026-01-28 00:51:05 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:08.905116 | orchestrator 
| 2026-01-28 00:51:08 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:08.906521 | orchestrator | 2026-01-28 00:51:08 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:08.910288 | orchestrator | 2026-01-28 00:51:08 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:08.910820 | orchestrator | 2026-01-28 00:51:08 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:08.912824 | orchestrator | 2026-01-28 00:51:08 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:08.912874 | orchestrator | 2026-01-28 00:51:08 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:11.971030 | orchestrator | 2026-01-28 00:51:11 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:11.971176 | orchestrator | 2026-01-28 00:51:11 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:11.971193 | orchestrator | 2026-01-28 00:51:11 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:11.971205 | orchestrator | 2026-01-28 00:51:11 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:11.971216 | orchestrator | 2026-01-28 00:51:11 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:11.971228 | orchestrator | 2026-01-28 00:51:11 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:15.239558 | orchestrator | 2026-01-28 00:51:15 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:15.239680 | orchestrator | 2026-01-28 00:51:15 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:15.239693 | orchestrator | 2026-01-28 00:51:15 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:15.240457 | orchestrator | 
2026-01-28 00:51:15 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:15.241077 | orchestrator | 2026-01-28 00:51:15 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:15.241133 | orchestrator | 2026-01-28 00:51:15 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:18.274283 | orchestrator | 2026-01-28 00:51:18 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:18.275346 | orchestrator | 2026-01-28 00:51:18 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:18.279443 | orchestrator | 2026-01-28 00:51:18 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:18.282908 | orchestrator | 2026-01-28 00:51:18 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:18.283235 | orchestrator | 2026-01-28 00:51:18 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:18.283265 | orchestrator | 2026-01-28 00:51:18 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:21.369639 | orchestrator | 2026-01-28 00:51:21 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:21.369731 | orchestrator | 2026-01-28 00:51:21 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:21.370445 | orchestrator | 2026-01-28 00:51:21 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:21.370823 | orchestrator | 2026-01-28 00:51:21 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:21.372031 | orchestrator | 2026-01-28 00:51:21 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:21.372050 | orchestrator | 2026-01-28 00:51:21 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:24.441573 | orchestrator | 2026-01-28 00:51:24 | INFO  | 
Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:24.441804 | orchestrator | 2026-01-28 00:51:24 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:24.442173 | orchestrator | 2026-01-28 00:51:24 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:24.444962 | orchestrator | 2026-01-28 00:51:24 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:24.445386 | orchestrator | 2026-01-28 00:51:24 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:24.445405 | orchestrator | 2026-01-28 00:51:24 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:27.481153 | orchestrator | 2026-01-28 00:51:27 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:27.481909 | orchestrator | 2026-01-28 00:51:27 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:27.483262 | orchestrator | 2026-01-28 00:51:27 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:27.485860 | orchestrator | 2026-01-28 00:51:27 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:27.488210 | orchestrator | 2026-01-28 00:51:27 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:27.488327 | orchestrator | 2026-01-28 00:51:27 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:30.560581 | orchestrator | 2026-01-28 00:51:30 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:30.561593 | orchestrator | 2026-01-28 00:51:30 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:30.562233 | orchestrator | 2026-01-28 00:51:30 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:30.562999 | orchestrator | 2026-01-28 00:51:30 | INFO  | Task 
a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:30.563587 | orchestrator | 2026-01-28 00:51:30 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:30.563839 | orchestrator | 2026-01-28 00:51:30 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:33.597311 | orchestrator | 2026-01-28 00:51:33 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:33.597564 | orchestrator | 2026-01-28 00:51:33 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:33.598913 | orchestrator | 2026-01-28 00:51:33 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:33.599754 | orchestrator | 2026-01-28 00:51:33 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:33.602518 | orchestrator | 2026-01-28 00:51:33 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:33.602567 | orchestrator | 2026-01-28 00:51:33 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:36.626859 | orchestrator | 2026-01-28 00:51:36 | INFO  | Task fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state STARTED 2026-01-28 00:51:36.629136 | orchestrator | 2026-01-28 00:51:36 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:36.631591 | orchestrator | 2026-01-28 00:51:36 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:36.631622 | orchestrator | 2026-01-28 00:51:36 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:36.632163 | orchestrator | 2026-01-28 00:51:36 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:36.632186 | orchestrator | 2026-01-28 00:51:36 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:39.667600 | orchestrator | 2026-01-28 00:51:39 | INFO  | Task 
fe1f80f0-eb53-4c03-9943-02fa481898b5 is in state SUCCESS 2026-01-28 00:51:39.667680 | orchestrator | 2026-01-28 00:51:39 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:39.669229 | orchestrator | 2026-01-28 00:51:39.669274 | orchestrator | 2026-01-28 00:51:39.669287 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 00:51:39.669301 | orchestrator | 2026-01-28 00:51:39.669313 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 00:51:39.669422 | orchestrator | Wednesday 28 January 2026 00:50:32 +0000 (0:00:00.417) 0:00:00.417 ***** 2026-01-28 00:51:39.669436 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:51:39.669448 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:51:39.669459 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:51:39.669470 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:51:39.669481 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:51:39.669492 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:51:39.669502 | orchestrator | 2026-01-28 00:51:39.669514 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 00:51:39.669525 | orchestrator | Wednesday 28 January 2026 00:50:33 +0000 (0:00:01.046) 0:00:01.463 ***** 2026-01-28 00:51:39.669536 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-28 00:51:39.669568 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-28 00:51:39.669580 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-28 00:51:39.669591 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-28 00:51:39.669602 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-28 00:51:39.669620 
| orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-01-28 00:51:39.669632 | orchestrator | 2026-01-28 00:51:39.669643 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-01-28 00:51:39.669653 | orchestrator | 2026-01-28 00:51:39.669664 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-01-28 00:51:39.669675 | orchestrator | Wednesday 28 January 2026 00:50:34 +0000 (0:00:00.763) 0:00:02.227 ***** 2026-01-28 00:51:39.669687 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:51:39.669700 | orchestrator | 2026-01-28 00:51:39.669711 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-01-28 00:51:39.669721 | orchestrator | Wednesday 28 January 2026 00:50:36 +0000 (0:00:01.481) 0:00:03.708 ***** 2026-01-28 00:51:39.669732 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-28 00:51:39.669743 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-28 00:51:39.669754 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-28 00:51:39.669765 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-28 00:51:39.669914 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-28 00:51:39.669929 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-28 00:51:39.669940 | orchestrator | 2026-01-28 00:51:39.669952 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-01-28 00:51:39.669963 | orchestrator | Wednesday 28 January 2026 00:50:37 +0000 (0:00:01.699) 0:00:05.408 ***** 2026-01-28 00:51:39.669974 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-01-28 00:51:39.669986 | 
orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-01-28 00:51:39.669997 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-01-28 00:51:39.670008 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-01-28 00:51:39.670135 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-01-28 00:51:39.670148 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-01-28 00:51:39.670159 | orchestrator | 2026-01-28 00:51:39.670170 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-01-28 00:51:39.670181 | orchestrator | Wednesday 28 January 2026 00:50:39 +0000 (0:00:01.415) 0:00:06.824 ***** 2026-01-28 00:51:39.670192 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-01-28 00:51:39.670203 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:51:39.670214 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-01-28 00:51:39.670225 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:51:39.670236 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-01-28 00:51:39.670246 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:51:39.670257 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-01-28 00:51:39.670268 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:51:39.670279 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-01-28 00:51:39.670289 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:51:39.670300 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-01-28 00:51:39.670311 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:51:39.670322 | orchestrator | 2026-01-28 00:51:39.670333 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-01-28 00:51:39.670354 | orchestrator | Wednesday 28 January 2026 00:50:40 +0000 (0:00:01.049) 0:00:07.873 ***** 
2026-01-28 00:51:39.670365 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:51:39.670376 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:51:39.670387 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:51:39.670398 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:51:39.670409 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:51:39.670420 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:51:39.670431 | orchestrator | 2026-01-28 00:51:39.670442 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-01-28 00:51:39.670453 | orchestrator | Wednesday 28 January 2026 00:50:40 +0000 (0:00:00.628) 0:00:08.501 ***** 2026-01-28 00:51:39.670484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670546 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670652 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670684 | orchestrator | 2026-01-28 00:51:39.670697 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-01-28 
00:51:39.670710 | orchestrator | Wednesday 28 January 2026 00:50:42 +0000 (0:00:01.297) 0:00:09.799 ***** 2026-01-28 00:51:39.670728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670807 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670887 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.670911 | orchestrator | 2026-01-28 00:51:39.670922 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-01-28 00:51:39.670934 | orchestrator | Wednesday 28 January 2026 00:50:45 +0000 (0:00:03.379) 0:00:13.179 ***** 2026-01-28 00:51:39.670945 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:51:39.670956 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:51:39.670967 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:51:39.670982 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:51:39.670993 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:51:39.671004 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:51:39.671015 | orchestrator | 2026-01-28 00:51:39.671026 | orchestrator | 
TASK [openvswitch : Check openvswitch containers] ****************************** 2026-01-28 00:51:39.671037 | orchestrator | Wednesday 28 January 2026 00:50:46 +0000 (0:00:01.326) 0:00:14.505 ***** 2026-01-28 00:51:39.671048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.671066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.671078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.671147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.671161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 
'timeout': '30'}}}) 2026-01-28 00:51:39.671178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-01-28 00:51:39.671189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.671212 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.671224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.671243 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-01-28 00:51:39.671255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-28 00:51:39.671266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-01-28 00:51:39.671283 | orchestrator |
2026-01-28 00:51:39.671295 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-28 00:51:39.671309 | orchestrator | Wednesday 28 January 2026 00:50:50 +0000 (0:00:03.325) 0:00:17.830 *****
2026-01-28 00:51:39.671327 | orchestrator |
2026-01-28 00:51:39.671345 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-28 00:51:39.671364 | orchestrator | Wednesday 28 January 2026 00:50:50 +0000 (0:00:00.146) 0:00:17.976 *****
2026-01-28 00:51:39.671381 | orchestrator |
2026-01-28 00:51:39.671399 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-28 00:51:39.671417 | orchestrator | Wednesday 28 January 2026 00:50:50 +0000 (0:00:00.128) 0:00:18.105 *****
2026-01-28 00:51:39.671428 | orchestrator |
2026-01-28 00:51:39.671439 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-28 00:51:39.671449 | orchestrator | Wednesday 28 January 2026 00:50:50 +0000 (0:00:00.134) 0:00:18.240 *****
2026-01-28 00:51:39.671460 | orchestrator |
2026-01-28 00:51:39.671471 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-28 00:51:39.671482 | orchestrator | Wednesday 28 January 2026 00:50:50 +0000 (0:00:00.284) 0:00:18.524 *****
2026-01-28 00:51:39.671492 | orchestrator |
2026-01-28 00:51:39.671503 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-01-28 00:51:39.671514 | orchestrator | Wednesday 28 January 2026 00:50:51 +0000 (0:00:00.338) 0:00:18.863 *****
2026-01-28 00:51:39.671524 | orchestrator |
2026-01-28 00:51:39.671535 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-01-28 00:51:39.671546 | orchestrator | Wednesday 28 January 2026 00:50:51 +0000 (0:00:00.126) 0:00:18.990 *****
2026-01-28 00:51:39.671556 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:51:39.671570 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:51:39.671589 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:51:39.671608 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:51:39.671627 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:51:39.671646 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:51:39.671665 | orchestrator |
2026-01-28 00:51:39.671685 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-01-28 00:51:39.671705 | orchestrator | Wednesday 28 January 2026 00:51:01 +0000 (0:00:09.990) 0:00:28.981 *****
2026-01-28 00:51:39.671725 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:51:39.671738 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:51:39.671748 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:51:39.671759 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:51:39.671772 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:51:39.671790 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:51:39.671809 | orchestrator |
2026-01-28 00:51:39.671822 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-28 00:51:39.671833 | orchestrator | Wednesday 28 January 2026 00:51:02 +0000 (0:00:01.299) 0:00:30.280 *****
2026-01-28 00:51:39.671843 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:51:39.671854 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:51:39.671865 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:51:39.671876 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:51:39.671886 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:51:39.671897 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:51:39.671908 | orchestrator |
2026-01-28 00:51:39.671919 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-01-28 00:51:39.671929 | orchestrator | Wednesday 28 January 2026 00:51:12 +0000 (0:00:09.932) 0:00:40.213 *****
2026-01-28 00:51:39.671947 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-01-28 00:51:39.671959 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-01-28 00:51:39.671982 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-01-28 00:51:39.671993 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-01-28 00:51:39.672004 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-01-28 00:51:39.672015 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-01-28 00:51:39.672025 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-01-28 00:51:39.672036 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-01-28 00:51:39.672047 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-01-28 00:51:39.672077 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-01-28 00:51:39.672125 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-01-28 00:51:39.672144 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-01-28 00:51:39.672164 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-28 00:51:39.672185 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-28 00:51:39.672204 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-28 00:51:39.672223 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-28 00:51:39.672239 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-28 00:51:39.672250 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-01-28 00:51:39.672261 | orchestrator |
2026-01-28 00:51:39.672272 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-01-28 00:51:39.672282 | orchestrator | Wednesday 28 January 2026 00:51:21 +0000 (0:00:09.113) 0:00:49.326 *****
2026-01-28 00:51:39.672294 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-01-28 00:51:39.672304 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:51:39.672315 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-01-28 00:51:39.672326 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:51:39.672337 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-01-28 00:51:39.672347 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:51:39.672358 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-01-28 00:51:39.672369 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-01-28 00:51:39.672379 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-01-28 00:51:39.672390 | orchestrator |
2026-01-28 00:51:39.672401 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-01-28 00:51:39.672412 | orchestrator | Wednesday 28 January 2026 00:51:23 +0000 (0:00:02.170) 0:00:51.497 *****
2026-01-28 00:51:39.672423 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-01-28 00:51:39.672433 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:51:39.672444 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-01-28 00:51:39.672455 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:51:39.672466 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-01-28 00:51:39.672476 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:51:39.672497 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-01-28 00:51:39.672507 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-01-28 00:51:39.672518 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-01-28 00:51:39.672529 | orchestrator |
2026-01-28 00:51:39.672539 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-01-28 00:51:39.672550 | orchestrator | Wednesday 28 January 2026 00:51:27 +0000 (0:00:03.449) 0:00:54.947 *****
2026-01-28 00:51:39.672561 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:51:39.672571 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:51:39.672582 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:51:39.672593 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:51:39.672603 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:51:39.672614 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:51:39.672624 | orchestrator |
2026-01-28 00:51:39.672635 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:51:39.672647 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-28 00:51:39.672666 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-28 00:51:39.672678 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-01-28 00:51:39.672689 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-28 00:51:39.672700 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-01-28 00:51:39.672711 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
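The "Set system-id, hostname and hw-offload" loop items and the br-ex / vxlan0 tasks above correspond to idempotent Open vSwitch database edits. A minimal sketch of the equivalent plain `ovs-vsctl` invocations, as an assumption for illustration only (kolla-ansible actually drives these through its own Ansible modules, not this CLI):

```python
# Sketch (assumption): render the logged loop items as ovs-vsctl argv lists.
# 'state': 'absent' items (hw-offload here) clear the key rather than set it.

def vsctl_set(col: str, name: str, value, state: str = "present") -> list[str]:
    """One item of 'Set system-id, hostname and hw-offload'."""
    if state == "absent":
        # ovs-vsctl remove drops a single key from a map column.
        return ["ovs-vsctl", "remove", "Open_vSwitch", ".", col, name]
    return ["ovs-vsctl", "set", "Open_vSwitch", ".", f"{col}:{name}={value}"]

def vsctl_bridge(bridge: str) -> list[str]:
    """'Ensuring OVS bridge is properly setup' — --may-exist keeps it idempotent."""
    return ["ovs-vsctl", "--may-exist", "add-br", bridge]

def vsctl_port(bridge: str, port: str) -> list[str]:
    """'Ensuring OVS ports are properly setup' for one (bridge, port) pair."""
    return ["ovs-vsctl", "--may-exist", "add-port", bridge, port]

print(vsctl_set("external_ids", "system-id", "testbed-node-0"))
print(vsctl_set("other_config", "hw-offload", True, state="absent"))
print(vsctl_bridge("br-ex"))
print(vsctl_port("br-ex", "vxlan0"))
```

This also makes the recap differences legible: only testbed-node-0..2 run the bridge and port items, which is why the other three hosts show them as skipped.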
2026-01-28 00:51:39.672721 | orchestrator | 2026-01-28 00:51:39.672732 | orchestrator | 2026-01-28 00:51:39.672743 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:51:39.672754 | orchestrator | Wednesday 28 January 2026 00:51:37 +0000 (0:00:09.951) 0:01:04.899 ***** 2026-01-28 00:51:39.672765 | orchestrator | =============================================================================== 2026-01-28 00:51:39.672775 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.88s 2026-01-28 00:51:39.672792 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.99s 2026-01-28 00:51:39.672803 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 9.11s 2026-01-28 00:51:39.672814 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.45s 2026-01-28 00:51:39.672825 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.38s 2026-01-28 00:51:39.672836 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.33s 2026-01-28 00:51:39.672846 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.17s 2026-01-28 00:51:39.672857 | orchestrator | module-load : Load modules ---------------------------------------------- 1.70s 2026-01-28 00:51:39.672868 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.48s 2026-01-28 00:51:39.672878 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.42s 2026-01-28 00:51:39.672889 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.33s 2026-01-28 00:51:39.672900 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.30s 2026-01-28 00:51:39.672910 | orchestrator | openvswitch : 
Ensuring config directories exist ------------------------- 1.30s 2026-01-28 00:51:39.672921 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.16s 2026-01-28 00:51:39.672941 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.05s 2026-01-28 00:51:39.672961 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.05s 2026-01-28 00:51:39.672980 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.76s 2026-01-28 00:51:39.672992 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.63s 2026-01-28 00:51:39.673003 | orchestrator | 2026-01-28 00:51:39 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:39.673013 | orchestrator | 2026-01-28 00:51:39 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:39.673024 | orchestrator | 2026-01-28 00:51:39 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:51:39.673035 | orchestrator | 2026-01-28 00:51:39 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:51:39.673046 | orchestrator | 2026-01-28 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:51:42.699202 | orchestrator | 2026-01-28 00:51:42 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:51:42.699290 | orchestrator | 2026-01-28 00:51:42 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:51:42.699321 | orchestrator | 2026-01-28 00:51:42 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:51:42.701318 | orchestrator | 2026-01-28 00:51:42 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:51:42.702263 | orchestrator | 2026-01-28 00:51:42 | INFO  | Task 
b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:52:37.772732 | orchestrator | 2026-01-28 00:52:37 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:52:37.773930 | orchestrator | 2026-01-28 00:52:37 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:52:37.774579 | orchestrator | 2026-01-28 00:52:37 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:52:37.774845 | orchestrator | 2026-01-28 00:52:37 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:52:40.803273 | orchestrator | 2026-01-28 00:52:40 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:52:40.805255 | orchestrator | 2026-01-28 00:52:40 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:52:40.806962 | orchestrator | 2026-01-28 00:52:40 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state STARTED 2026-01-28 00:52:40.809428 | orchestrator | 2026-01-28 00:52:40 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:52:40.810983 | orchestrator | 2026-01-28 00:52:40 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:52:40.811936 | orchestrator | 2026-01-28 00:52:40 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:52:43.837198 | orchestrator | 2026-01-28 00:52:43 | INFO  | Task d7e050b8-07ad-4632-8042-aaf5aa5a56d7 is in state STARTED 2026-01-28 00:52:43.837427 | orchestrator | 2026-01-28 00:52:43 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:52:43.837950 | orchestrator | 2026-01-28 00:52:43 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:52:43.842309 | orchestrator | 2026-01-28 00:52:43.842371 | orchestrator | 2026-01-28 00:52:43 | INFO  | Task a14b0f63-5775-4240-a749-dbc23b4cd98d is in state SUCCESS 2026-01-28 00:52:43.843447 | orchestrator 
| 2026-01-28 00:52:43.843485 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-01-28 00:52:43.843498 | orchestrator | 2026-01-28 00:52:43.843526 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-01-28 00:52:43.843538 | orchestrator | Wednesday 28 January 2026 00:48:17 +0000 (0:00:00.178) 0:00:00.178 ***** 2026-01-28 00:52:43.843550 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:52:43.843562 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:52:43.843573 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:52:43.843583 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:52:43.843594 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:52:43.843605 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:52:43.843616 | orchestrator | 2026-01-28 00:52:43.843627 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-01-28 00:52:43.843638 | orchestrator | Wednesday 28 January 2026 00:48:17 +0000 (0:00:00.746) 0:00:00.925 ***** 2026-01-28 00:52:43.843773 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.843787 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.843798 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:52:43.843809 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.843845 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.843856 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.843867 | orchestrator | 2026-01-28 00:52:43.843878 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-01-28 00:52:43.843889 | orchestrator | Wednesday 28 January 2026 00:48:18 +0000 (0:00:00.670) 0:00:01.595 ***** 2026-01-28 00:52:43.843900 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.843911 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.843921 | orchestrator | 
skipping: [testbed-node-5] 2026-01-28 00:52:43.843932 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.843942 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.843953 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.843964 | orchestrator | 2026-01-28 00:52:43.843974 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-01-28 00:52:43.843985 | orchestrator | Wednesday 28 January 2026 00:48:19 +0000 (0:00:00.757) 0:00:02.353 ***** 2026-01-28 00:52:43.843996 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:52:43.844006 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:52:43.844017 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:52:43.844033 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:52:43.844053 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:52:43.844106 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:52:43.844125 | orchestrator | 2026-01-28 00:52:43.844142 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-01-28 00:52:43.844159 | orchestrator | Wednesday 28 January 2026 00:48:22 +0000 (0:00:02.994) 0:00:05.347 ***** 2026-01-28 00:52:43.844175 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:52:43.844192 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:52:43.844209 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:52:43.844227 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:52:43.844245 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:52:43.844264 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:52:43.844283 | orchestrator | 2026-01-28 00:52:43.844302 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-01-28 00:52:43.844322 | orchestrator | Wednesday 28 January 2026 00:48:23 +0000 (0:00:01.572) 0:00:06.920 ***** 2026-01-28 00:52:43.844340 | orchestrator | changed: 
[testbed-node-3] 2026-01-28 00:52:43.844357 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:52:43.844376 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:52:43.844388 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:52:43.844399 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:52:43.844410 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:52:43.844420 | orchestrator | 2026-01-28 00:52:43.844431 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-01-28 00:52:43.844442 | orchestrator | Wednesday 28 January 2026 00:48:25 +0000 (0:00:01.757) 0:00:08.678 ***** 2026-01-28 00:52:43.844452 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.844463 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.844473 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:52:43.844484 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.844495 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.844505 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.844515 | orchestrator | 2026-01-28 00:52:43.844526 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-01-28 00:52:43.844537 | orchestrator | Wednesday 28 January 2026 00:48:26 +0000 (0:00:00.911) 0:00:09.589 ***** 2026-01-28 00:52:43.844548 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.844558 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.844569 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:52:43.844579 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.844590 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.844600 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.844611 | orchestrator | 2026-01-28 00:52:43.844633 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-01-28 00:52:43.844653 | 
orchestrator | Wednesday 28 January 2026 00:48:27 +0000 (0:00:00.708) 0:00:10.298 ***** 2026-01-28 00:52:43.844672 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-28 00:52:43.844690 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-28 00:52:43.844708 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.844726 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-28 00:52:43.844745 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-28 00:52:43.844764 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.844775 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-28 00:52:43.844884 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-28 00:52:43.844895 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:52:43.844906 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-28 00:52:43.844930 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-28 00:52:43.844942 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.844962 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-28 00:52:43.844973 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-28 00:52:43.844984 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.844994 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-01-28 00:52:43.845005 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-01-28 00:52:43.845016 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.845026 | orchestrator | 2026-01-28 00:52:43.845037 | 
orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-01-28 00:52:43.845048 | orchestrator | Wednesday 28 January 2026 00:48:28 +0000 (0:00:01.251) 0:00:11.550 ***** 2026-01-28 00:52:43.845098 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.845113 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.845123 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:52:43.845134 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.845144 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.845155 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.845166 | orchestrator | 2026-01-28 00:52:43.845177 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-01-28 00:52:43.845189 | orchestrator | Wednesday 28 January 2026 00:48:29 +0000 (0:00:01.369) 0:00:12.919 ***** 2026-01-28 00:52:43.845200 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:52:43.845211 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:52:43.845222 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:52:43.845232 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:52:43.845243 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:52:43.845253 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:52:43.845264 | orchestrator | 2026-01-28 00:52:43.845275 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-01-28 00:52:43.845285 | orchestrator | Wednesday 28 January 2026 00:48:30 +0000 (0:00:00.832) 0:00:13.752 ***** 2026-01-28 00:52:43.845296 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:52:43.845307 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:52:43.845317 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:52:43.845328 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:52:43.845338 | orchestrator | changed: [testbed-node-3] 2026-01-28 
00:52:43.845349 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:52:43.845360 | orchestrator | 2026-01-28 00:52:43.845370 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-01-28 00:52:43.845381 | orchestrator | Wednesday 28 January 2026 00:48:35 +0000 (0:00:05.260) 0:00:19.012 ***** 2026-01-28 00:52:43.845402 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.845413 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.845423 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:52:43.845434 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.845445 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.845455 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.845466 | orchestrator | 2026-01-28 00:52:43.845606 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-01-28 00:52:43.845622 | orchestrator | Wednesday 28 January 2026 00:48:38 +0000 (0:00:02.812) 0:00:21.825 ***** 2026-01-28 00:52:43.845632 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.845643 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.845653 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:52:43.845665 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.845675 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.845686 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.845697 | orchestrator | 2026-01-28 00:52:43.845708 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-01-28 00:52:43.845720 | orchestrator | Wednesday 28 January 2026 00:48:41 +0000 (0:00:02.701) 0:00:24.526 ***** 2026-01-28 00:52:43.845731 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.845742 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.845752 | 
orchestrator | skipping: [testbed-node-5] 2026-01-28 00:52:43.845765 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.845784 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.845802 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.845820 | orchestrator | 2026-01-28 00:52:43.845838 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-01-28 00:52:43.845856 | orchestrator | Wednesday 28 January 2026 00:48:42 +0000 (0:00:01.071) 0:00:25.598 ***** 2026-01-28 00:52:43.845873 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-01-28 00:52:43.845891 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-01-28 00:52:43.845910 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.845928 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-01-28 00:52:43.845947 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-01-28 00:52:43.845966 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.845985 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-01-28 00:52:43.846004 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-01-28 00:52:43.846108 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:52:43.846129 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-01-28 00:52:43.846146 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-01-28 00:52:43.846162 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.846178 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-01-28 00:52:43.846195 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-01-28 00:52:43.846212 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.846229 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-01-28 00:52:43.846249 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  
2026-01-28 00:52:43.846268 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.846286 | orchestrator |
2026-01-28 00:52:43.846305 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-01-28 00:52:43.846345 | orchestrator | Wednesday 28 January 2026 00:48:44 +0000 (0:00:01.609) 0:00:27.207 *****
2026-01-28 00:52:43.846365 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:52:43.846401 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:52:43.846420 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:52:43.846438 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:52:43.846457 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.846479 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.846490 | orchestrator |
2026-01-28 00:52:43.846501 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-01-28 00:52:43.846512 | orchestrator | Wednesday 28 January 2026 00:48:44 +0000 (0:00:00.820) 0:00:28.028 *****
2026-01-28 00:52:43.846522 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:52:43.846533 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:52:43.846543 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:52:43.846554 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:52:43.846565 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.846575 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.846586 | orchestrator |
2026-01-28 00:52:43.846597 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-01-28 00:52:43.846607 | orchestrator |
2026-01-28 00:52:43.846618 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-01-28 00:52:43.846629 | orchestrator | Wednesday 28 January 2026 00:48:46 +0000 (0:00:01.967) 0:00:29.996 *****
2026-01-28 00:52:43.846639 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.846650 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.846661 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.846671 | orchestrator |
2026-01-28 00:52:43.846682 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-01-28 00:52:43.846692 | orchestrator | Wednesday 28 January 2026 00:48:48 +0000 (0:00:01.793) 0:00:31.789 *****
2026-01-28 00:52:43.846703 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.846713 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.846724 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.846734 | orchestrator |
2026-01-28 00:52:43.846745 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-01-28 00:52:43.846756 | orchestrator | Wednesday 28 January 2026 00:48:50 +0000 (0:00:01.335) 0:00:33.125 *****
2026-01-28 00:52:43.846766 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.846777 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.846787 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.846798 | orchestrator |
2026-01-28 00:52:43.846808 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-01-28 00:52:43.846819 | orchestrator | Wednesday 28 January 2026 00:48:51 +0000 (0:00:01.228) 0:00:34.515 *****
2026-01-28 00:52:43.846829 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.846840 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.846851 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.846861 | orchestrator |
2026-01-28 00:52:43.846872 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-01-28 00:52:43.846883 | orchestrator | Wednesday 28 January 2026 00:48:52 +0000 (0:00:01.228) 0:00:35.743 *****
2026-01-28 00:52:43.846898 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:52:43.846916 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.846934 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.846952 | orchestrator |
2026-01-28 00:52:43.846969 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-01-28 00:52:43.846988 | orchestrator | Wednesday 28 January 2026 00:48:53 +0000 (0:00:00.501) 0:00:36.244 *****
2026-01-28 00:52:43.847006 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.847024 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.847044 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.847205 | orchestrator |
2026-01-28 00:52:43.847249 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-01-28 00:52:43.847262 | orchestrator | Wednesday 28 January 2026 00:48:54 +0000 (0:00:01.397) 0:00:37.642 *****
2026-01-28 00:52:43.847273 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.847284 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.847294 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.847305 | orchestrator |
2026-01-28 00:52:43.847316 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-01-28 00:52:43.847339 | orchestrator | Wednesday 28 January 2026 00:48:56 +0000 (0:00:01.698) 0:00:39.341 *****
2026-01-28 00:52:43.847350 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:52:43.847361 | orchestrator |
2026-01-28 00:52:43.847372 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-01-28 00:52:43.847383 | orchestrator | Wednesday 28 January 2026 00:48:56 +0000 (0:00:00.433) 0:00:39.774 *****
2026-01-28 00:52:43.847393 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.847404 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.847415 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.847426 | orchestrator |
2026-01-28 00:52:43.847436 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-01-28 00:52:43.847447 | orchestrator | Wednesday 28 January 2026 00:48:58 +0000 (0:00:02.054) 0:00:41.829 *****
2026-01-28 00:52:43.847458 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.847468 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.847479 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.847490 | orchestrator |
2026-01-28 00:52:43.847501 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-01-28 00:52:43.847511 | orchestrator | Wednesday 28 January 2026 00:48:59 +0000 (0:00:00.775) 0:00:42.604 *****
2026-01-28 00:52:43.847522 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.847533 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.847543 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.847554 | orchestrator |
2026-01-28 00:52:43.847562 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-01-28 00:52:43.847570 | orchestrator | Wednesday 28 January 2026 00:49:00 +0000 (0:00:00.856) 0:00:43.461 *****
2026-01-28 00:52:43.847578 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.847586 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.847593 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.847601 | orchestrator |
2026-01-28 00:52:43.847609 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-01-28 00:52:43.847628 | orchestrator | Wednesday 28 January 2026 00:49:01 +0000 (0:00:01.550) 0:00:45.012 *****
2026-01-28 00:52:43.847642 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:52:43.847650 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.847658 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.847666 | orchestrator |
2026-01-28 00:52:43.847674 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-01-28 00:52:43.847682 | orchestrator | Wednesday 28 January 2026 00:49:02 +0000 (0:00:00.488) 0:00:45.500 *****
2026-01-28 00:52:43.847690 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:52:43.847697 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.847705 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.847713 | orchestrator |
2026-01-28 00:52:43.847721 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-01-28 00:52:43.847729 | orchestrator | Wednesday 28 January 2026 00:49:02 +0000 (0:00:00.305) 0:00:45.806 *****
2026-01-28 00:52:43.847736 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.847744 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.847752 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.847760 | orchestrator |
2026-01-28 00:52:43.847768 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-01-28 00:52:43.847775 | orchestrator | Wednesday 28 January 2026 00:49:03 +0000 (0:00:01.039) 0:00:46.846 *****
2026-01-28 00:52:43.847783 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.847791 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.847798 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.847806 | orchestrator |
2026-01-28 00:52:43.847814 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-01-28 00:52:43.847822 | orchestrator | Wednesday 28 January 2026 00:49:05 +0000 (0:00:02.156) 0:00:49.002 *****
2026-01-28 00:52:43.847836 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.847844 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.847851 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.847859 | orchestrator |
2026-01-28 00:52:43.847867 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-01-28 00:52:43.847875 | orchestrator | Wednesday 28 January 2026 00:49:06 +0000 (0:00:00.845) 0:00:49.847 *****
2026-01-28 00:52:43.847883 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-28 00:52:43.847892 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-28 00:52:43.847900 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-01-28 00:52:43.847908 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-28 00:52:43.847916 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-28 00:52:43.847924 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-01-28 00:52:43.847931 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-28 00:52:43.847939 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-28 00:52:43.847947 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-01-28 00:52:43.847955 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-28 00:52:43.847962 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-28 00:52:43.847970 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-01-28 00:52:43.847978 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.847986 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.847994 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.848002 | orchestrator |
2026-01-28 00:52:43.848010 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-01-28 00:52:43.848018 | orchestrator | Wednesday 28 January 2026 00:49:49 +0000 (0:00:43.155) 0:01:33.003 *****
2026-01-28 00:52:43.848025 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:52:43.848047 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.848080 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.848092 | orchestrator |
2026-01-28 00:52:43.848100 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-01-28 00:52:43.848108 | orchestrator | Wednesday 28 January 2026 00:49:50 +0000 (0:00:00.891) 0:01:33.894 *****
2026-01-28 00:52:43.848116 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.848123 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.848131 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.848139 | orchestrator |
2026-01-28 00:52:43.848147 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-01-28 00:52:43.848155 | orchestrator | Wednesday 28 January 2026 00:49:52 +0000 (0:00:01.468) 0:01:35.363 *****
2026-01-28 00:52:43.848162 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.848170 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.848178 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.848195 | orchestrator |
2026-01-28 00:52:43.848207 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-01-28 00:52:43.848219 | orchestrator | Wednesday 28 January 2026 00:49:54 +0000 (0:00:01.870) 0:01:37.233 *****
2026-01-28 00:52:43.848227 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.848235 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.848243 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.848251 | orchestrator |
2026-01-28 00:52:43.848258 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-01-28 00:52:43.848266 | orchestrator | Wednesday 28 January 2026 00:50:19 +0000 (0:00:24.918) 0:02:02.151 *****
2026-01-28 00:52:43.848274 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.848282 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.848289 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.848297 | orchestrator |
2026-01-28 00:52:43.848305 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-01-28 00:52:43.848313 | orchestrator | Wednesday 28 January 2026 00:50:19 +0000 (0:00:00.674) 0:02:02.826 *****
2026-01-28 00:52:43.848321 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.848328 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.848336 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.848344 | orchestrator |
2026-01-28 00:52:43.848351 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-01-28 00:52:43.848359 | orchestrator | Wednesday 28 January 2026 00:50:21 +0000 (0:00:01.631) 0:02:04.458 *****
2026-01-28 00:52:43.848367 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.848375 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.848382 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.848390 | orchestrator |
2026-01-28 00:52:43.848398 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-01-28 00:52:43.848406 | orchestrator | Wednesday 28 January 2026 00:50:22 +0000 (0:00:00.642) 0:02:05.101 *****
2026-01-28 00:52:43.848413 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.848421 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.848429 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.848437 | orchestrator |
2026-01-28 00:52:43.848445 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-01-28 00:52:43.848452 | orchestrator | Wednesday 28 January 2026 00:50:22 +0000 (0:00:00.834) 0:02:05.935 *****
2026-01-28 00:52:43.848460 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.848468 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.848476 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.848483 | orchestrator |
2026-01-28 00:52:43.848491 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-01-28 00:52:43.848499 | orchestrator | Wednesday 28 January 2026 00:50:23 +0000 (0:00:00.275) 0:02:06.210 *****
2026-01-28 00:52:43.848507 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.848514 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.848522 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.848530 | orchestrator |
2026-01-28 00:52:43.848538 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-01-28 00:52:43.848545 | orchestrator | Wednesday 28 January 2026 00:50:23 +0000 (0:00:00.658) 0:02:06.869 *****
2026-01-28 00:52:43.848553 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.848561 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.848569 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.848576 | orchestrator |
2026-01-28 00:52:43.848584 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-01-28 00:52:43.848592 | orchestrator | Wednesday 28 January 2026 00:50:24 +0000 (0:00:00.646) 0:02:07.515 *****
2026-01-28 00:52:43.848600 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.848608 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.848615 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.848623 | orchestrator |
2026-01-28 00:52:43.848631 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-01-28 00:52:43.848644 | orchestrator | Wednesday 28 January 2026 00:50:25 +0000 (0:00:00.970) 0:02:08.486 *****
2026-01-28 00:52:43.848652 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:52:43.848660 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:52:43.848668 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:52:43.848675 | orchestrator |
2026-01-28 00:52:43.848683 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-01-28 00:52:43.848691 | orchestrator | Wednesday 28 January 2026 00:50:26 +0000 (0:00:00.715) 0:02:09.202 *****
2026-01-28 00:52:43.848699 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:52:43.848706 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.848714 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.848722 | orchestrator |
2026-01-28 00:52:43.848730 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-01-28 00:52:43.848738 | orchestrator | Wednesday 28 January 2026 00:50:26 +0000 (0:00:00.252) 0:02:09.454 *****
2026-01-28 00:52:43.848745 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:52:43.848753 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.848761 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.848769 | orchestrator |
2026-01-28 00:52:43.848776 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-01-28 00:52:43.848784 | orchestrator | Wednesday 28 January 2026 00:50:26 +0000 (0:00:00.267) 0:02:09.721 *****
2026-01-28 00:52:43.848792 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.848800 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.848807 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.848815 | orchestrator |
2026-01-28 00:52:43.848823 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-01-28 00:52:43.848831 | orchestrator | Wednesday 28 January 2026 00:50:27 +0000 (0:00:00.707) 0:02:10.428 *****
2026-01-28 00:52:43.848839 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.848846 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.848854 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.848862 | orchestrator |
2026-01-28 00:52:43.848870 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-01-28 00:52:43.848878 | orchestrator | Wednesday 28 January 2026 00:50:27 +0000 (0:00:00.571) 0:02:11.000 *****
2026-01-28 00:52:43.848885 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-28 00:52:43.848897 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-28 00:52:43.848909 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-01-28 00:52:43.848917 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-28 00:52:43.848925 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-28 00:52:43.848933 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-01-28 00:52:43.848941 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-28 00:52:43.848949 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-28 00:52:43.848956 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-01-28 00:52:43.848964 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-01-28 00:52:43.848972 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-28 00:52:43.848980 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-28 00:52:43.848987 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-01-28 00:52:43.848995 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-28 00:52:43.849008 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-28 00:52:43.849016 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-01-28 00:52:43.849024 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-28 00:52:43.849032 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-28 00:52:43.849039 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-01-28 00:52:43.849047 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-01-28 00:52:43.849055 | orchestrator |
2026-01-28 00:52:43.849089 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-01-28 00:52:43.849097 | orchestrator |
2026-01-28 00:52:43.849105 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-01-28 00:52:43.849113 | orchestrator | Wednesday 28 January 2026 00:50:30 +0000 (0:00:02.700) 0:02:13.700 *****
2026-01-28 00:52:43.849121 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:52:43.849128 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:52:43.849136 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:52:43.849144 | orchestrator |
2026-01-28 00:52:43.849152 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-01-28 00:52:43.849160 | orchestrator | Wednesday 28 January 2026 00:50:31 +0000 (0:00:00.469) 0:02:14.170 *****
2026-01-28 00:52:43.849167 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:52:43.849175 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:52:43.849183 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:52:43.849191 | orchestrator |
2026-01-28 00:52:43.849199 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-01-28 00:52:43.849206 | orchestrator | Wednesday 28 January 2026 00:50:31 +0000 (0:00:00.529) 0:02:14.700 *****
2026-01-28 00:52:43.849214 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:52:43.849222 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:52:43.849230 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:52:43.849237 | orchestrator |
2026-01-28 00:52:43.849245 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-01-28 00:52:43.849253 | orchestrator | Wednesday 28 January 2026 00:50:31 +0000 (0:00:00.304) 0:02:15.004 *****
2026-01-28 00:52:43.849261 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:52:43.849269 | orchestrator |
2026-01-28 00:52:43.849277 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-01-28 00:52:43.849284 | orchestrator | Wednesday 28 January 2026 00:50:32 +0000 (0:00:00.569) 0:02:15.573 *****
2026-01-28 00:52:43.849292 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:52:43.849300 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:52:43.849308 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:52:43.849316 | orchestrator |
2026-01-28 00:52:43.849324 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-01-28 00:52:43.849331 | orchestrator | Wednesday 28 January 2026 00:50:32 +0000 (0:00:00.280) 0:02:15.853 *****
2026-01-28 00:52:43.849339 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:52:43.849347 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:52:43.849355 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:52:43.849362 | orchestrator |
2026-01-28 00:52:43.849370 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-01-28 00:52:43.849378 | orchestrator | Wednesday 28 January 2026 00:50:33 +0000 (0:00:00.262) 0:02:16.116 *****
2026-01-28 00:52:43.849386 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:52:43.849393 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:52:43.849401 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:52:43.849409 | orchestrator |
2026-01-28 00:52:43.849417 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-01-28 00:52:43.849430 | orchestrator | Wednesday 28 January 2026 00:50:33 +0000 (0:00:00.300) 0:02:16.416 *****
2026-01-28 00:52:43.849438 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:52:43.849446 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:52:43.849453 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:52:43.849461 | orchestrator |
2026-01-28 00:52:43.849474 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-01-28 00:52:43.849482 | orchestrator | Wednesday 28 January 2026 00:50:33 +0000 (0:00:00.679) 0:02:17.096 *****
2026-01-28 00:52:43.849490 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:52:43.849498 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:52:43.849506 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:52:43.849513 | orchestrator |
2026-01-28 00:52:43.849521 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-01-28 00:52:43.849529 | orchestrator | Wednesday 28 January 2026 00:50:34 +0000 (0:00:00.942) 0:02:18.039 *****
2026-01-28 00:52:43.849537 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:52:43.849545 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:52:43.849552 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:52:43.849560 | orchestrator |
2026-01-28 00:52:43.849568 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-01-28 00:52:43.849576 | orchestrator | Wednesday 28 January 2026 00:50:36 +0000 (0:00:01.285) 0:02:19.325 *****
2026-01-28 00:52:43.849583 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:52:43.849591 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:52:43.849599 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:52:43.849607 | orchestrator |
2026-01-28 00:52:43.849614 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-01-28 00:52:43.849622 | orchestrator |
2026-01-28 00:52:43.849630 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-01-28 00:52:43.849638 | orchestrator | Wednesday 28 January 2026 00:50:47 +0000 (0:00:11.341) 0:02:30.666 *****
2026-01-28 00:52:43.849646 | orchestrator | ok: [testbed-manager]
2026-01-28 00:52:43.849653 | orchestrator |
2026-01-28 00:52:43.849661 | orchestrator | TASK [Create .kube directory] **************************************************
2026-01-28 00:52:43.849669 | orchestrator | Wednesday 28 January 2026 00:50:48 +0000 (0:00:00.794) 0:02:31.461 *****
2026-01-28 00:52:43.849677 | orchestrator | changed: [testbed-manager]
2026-01-28 00:52:43.849684 | orchestrator |
2026-01-28 00:52:43.849692 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-01-28 00:52:43.849700 | orchestrator | Wednesday 28 January 2026 00:50:48 +0000 (0:00:00.425) 0:02:31.886 *****
2026-01-28 00:52:43.849708 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-01-28 00:52:43.849715 | orchestrator |
2026-01-28 00:52:43.849723 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-01-28 00:52:43.849731 | orchestrator | Wednesday 28 January 2026 00:50:49 +0000 (0:00:00.584) 0:02:32.471 *****
2026-01-28 00:52:43.849739 | orchestrator | changed: [testbed-manager]
2026-01-28 00:52:43.849747 | orchestrator |
2026-01-28 00:52:43.849754 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-01-28 00:52:43.849762 | orchestrator | Wednesday 28 January 2026 00:50:50 +0000 (0:00:00.728) 0:02:33.200 *****
2026-01-28 00:52:43.849770 | orchestrator | changed: [testbed-manager]
2026-01-28 00:52:43.849778 | orchestrator |
2026-01-28 00:52:43.849785 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-01-28 00:52:43.849793 | orchestrator | Wednesday 28 January 2026 00:50:50 +0000 (0:00:00.515) 0:02:33.715 *****
2026-01-28 00:52:43.849801 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-28 00:52:43.849809 | orchestrator |
2026-01-28 00:52:43.849817 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-01-28 00:52:43.849824 | orchestrator | Wednesday 28 January 2026 00:50:52 +0000 (0:00:01.500) 0:02:35.216 *****
2026-01-28 00:52:43.849832 | orchestrator | changed: [testbed-manager -> localhost]
2026-01-28 00:52:43.849846 | orchestrator |
2026-01-28 00:52:43.849854 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-01-28 00:52:43.849861 | orchestrator | Wednesday 28 January 2026 00:50:53 +0000 (0:00:00.883) 0:02:36.100 *****
2026-01-28 00:52:43.849869 | orchestrator | changed: [testbed-manager]
2026-01-28 00:52:43.849877 | orchestrator |
2026-01-28 00:52:43.849885 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-01-28 00:52:43.849892 | orchestrator | Wednesday 28 January 2026 00:50:53 +0000 (0:00:00.513) 0:02:36.613 *****
2026-01-28 00:52:43.849900 | orchestrator | changed: [testbed-manager]
2026-01-28 00:52:43.849908 | orchestrator |
2026-01-28 00:52:43.849916 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-01-28 00:52:43.849923 | orchestrator |
2026-01-28 00:52:43.849931 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-01-28 00:52:43.849939 | orchestrator | Wednesday 28 January 2026 00:50:54 +0000 (0:00:00.618) 0:02:37.231 *****
2026-01-28 00:52:43.849947 | orchestrator | ok: [testbed-manager]
2026-01-28 00:52:43.849954 | orchestrator |
2026-01-28 00:52:43.849962 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-01-28 00:52:43.849970 | orchestrator | Wednesday 28 January 2026 00:50:54 +0000 (0:00:00.121) 0:02:37.353 *****
2026-01-28 00:52:43.849978 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-01-28 00:52:43.849985 | orchestrator |
2026-01-28 00:52:43.849993 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-01-28 00:52:43.850001 | orchestrator | Wednesday 28 January 2026 00:50:54 +0000 (0:00:00.208) 0:02:37.561 *****
2026-01-28 00:52:43.850008 | orchestrator | ok: [testbed-manager]
2026-01-28 00:52:43.850042 | orchestrator |
2026-01-28 00:52:43.850052 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-01-28 00:52:43.850086 | orchestrator | Wednesday 28 January 2026 00:50:55 +0000 (0:00:00.848) 0:02:38.409 *****
2026-01-28 00:52:43.850101 | orchestrator | ok: [testbed-manager]
2026-01-28 00:52:43.850114 | orchestrator |
2026-01-28 00:52:43.850678 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-01-28 00:52:43.850694 | orchestrator | Wednesday 28 January 2026 00:50:56 +0000 (0:00:01.331) 0:02:39.740 *****
2026-01-28 00:52:43.850703 | orchestrator | changed: [testbed-manager]
2026-01-28 00:52:43.850712 | orchestrator |
2026-01-28 00:52:43.850879 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-01-28 00:52:43.850889 | orchestrator | Wednesday 28 January 2026 00:50:57 +0000 (0:00:00.761) 0:02:40.502 *****
2026-01-28 00:52:43.850897 | orchestrator | ok: [testbed-manager]
2026-01-28 00:52:43.850906 | orchestrator |
2026-01-28 00:52:43.850925 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-01-28 00:52:43.850934 | orchestrator | Wednesday 28 January 2026 00:50:57 +0000 (0:00:00.445) 0:02:40.947 *****
2026-01-28 00:52:43.850943 | orchestrator | changed: [testbed-manager]
2026-01-28 00:52:43.850952 | orchestrator |
2026-01-28 00:52:43.850960 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-01-28 00:52:43.850978 | orchestrator | Wednesday 28 January 2026 00:51:07 +0000 (0:00:09.320) 0:02:50.268 *****
2026-01-28 00:52:43.850987 | orchestrator | changed: [testbed-manager]
2026-01-28 00:52:43.850995 | orchestrator |
2026-01-28 00:52:43.851004 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-01-28 00:52:43.851013 | orchestrator | Wednesday 28 January 2026 00:51:21 +0000 (0:00:13.999) 0:03:04.267 *****
2026-01-28 00:52:43.851022 | orchestrator | ok: [testbed-manager]
2026-01-28 00:52:43.851031 | orchestrator |
2026-01-28 00:52:43.851039 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-01-28 00:52:43.851048 | orchestrator |
2026-01-28 00:52:43.851057 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-01-28 00:52:43.851123 | orchestrator | Wednesday 28 January 2026 00:51:21 +0000 (0:00:00.470) 0:03:04.738 *****
2026-01-28 00:52:43.851155 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:52:43.851173 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:52:43.851181 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:52:43.851190 | orchestrator |
2026-01-28 00:52:43.851199 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-01-28 00:52:43.851207 | orchestrator | Wednesday 28 January 2026 00:51:21 +0000 (0:00:00.275) 0:03:05.014 *****
2026-01-28 00:52:43.851216 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:52:43.851225 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:52:43.851233 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:52:43.851242 | orchestrator |
2026-01-28 00:52:43.851250 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-01-28 00:52:43.851259 | orchestrator | Wednesday 28 January 2026 00:51:22 +0000 (0:00:00.269) 0:03:05.283 *****
2026-01-28 00:52:43.851268 |
orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:52:43.851277 | orchestrator | 2026-01-28 00:52:43.851285 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-01-28 00:52:43.851294 | orchestrator | Wednesday 28 January 2026 00:51:22 +0000 (0:00:00.661) 0:03:05.944 ***** 2026-01-28 00:52:43.851303 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-28 00:52:43.851311 | orchestrator | 2026-01-28 00:52:43.851320 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-01-28 00:52:43.851328 | orchestrator | Wednesday 28 January 2026 00:51:23 +0000 (0:00:00.792) 0:03:06.737 ***** 2026-01-28 00:52:43.851337 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 00:52:43.851346 | orchestrator | 2026-01-28 00:52:43.851354 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-01-28 00:52:43.851363 | orchestrator | Wednesday 28 January 2026 00:51:24 +0000 (0:00:00.873) 0:03:07.610 ***** 2026-01-28 00:52:43.851372 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.851380 | orchestrator | 2026-01-28 00:52:43.851389 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-01-28 00:52:43.851397 | orchestrator | Wednesday 28 January 2026 00:51:24 +0000 (0:00:00.106) 0:03:07.716 ***** 2026-01-28 00:52:43.851406 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 00:52:43.851415 | orchestrator | 2026-01-28 00:52:43.851424 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-01-28 00:52:43.851432 | orchestrator | Wednesday 28 January 2026 00:51:25 +0000 (0:00:00.871) 0:03:08.588 ***** 2026-01-28 00:52:43.851441 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.851450 | orchestrator | 2026-01-28 
00:52:43.851458 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-01-28 00:52:43.851467 | orchestrator | Wednesday 28 January 2026 00:51:25 +0000 (0:00:00.134) 0:03:08.722 ***** 2026-01-28 00:52:43.851476 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.851484 | orchestrator | 2026-01-28 00:52:43.851493 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-01-28 00:52:43.851502 | orchestrator | Wednesday 28 January 2026 00:51:25 +0000 (0:00:00.150) 0:03:08.873 ***** 2026-01-28 00:52:43.851510 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.851519 | orchestrator | 2026-01-28 00:52:43.851527 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-01-28 00:52:43.851536 | orchestrator | Wednesday 28 January 2026 00:51:25 +0000 (0:00:00.106) 0:03:08.980 ***** 2026-01-28 00:52:43.851545 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.851553 | orchestrator | 2026-01-28 00:52:43.851562 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-01-28 00:52:43.851570 | orchestrator | Wednesday 28 January 2026 00:51:25 +0000 (0:00:00.117) 0:03:09.097 ***** 2026-01-28 00:52:43.851579 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-28 00:52:43.851588 | orchestrator | 2026-01-28 00:52:43.851597 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-01-28 00:52:43.851611 | orchestrator | Wednesday 28 January 2026 00:51:31 +0000 (0:00:05.078) 0:03:14.176 ***** 2026-01-28 00:52:43.851619 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-01-28 00:52:43.851628 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-01-28 00:52:43.851637 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-01-28 00:52:43.851646 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-01-28 00:52:43.851655 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-01-28 00:52:43.851663 | orchestrator | 2026-01-28 00:52:43.851672 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-01-28 00:52:43.851680 | orchestrator | Wednesday 28 January 2026 00:52:13 +0000 (0:00:42.822) 0:03:56.999 ***** 2026-01-28 00:52:43.851695 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 00:52:43.851704 | orchestrator | 2026-01-28 00:52:43.851713 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-01-28 00:52:43.851721 | orchestrator | Wednesday 28 January 2026 00:52:15 +0000 (0:00:01.268) 0:03:58.267 ***** 2026-01-28 00:52:43.851730 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-28 00:52:43.851739 | orchestrator | 2026-01-28 00:52:43.851752 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-01-28 00:52:43.851761 | orchestrator | Wednesday 28 January 2026 00:52:16 +0000 (0:00:01.710) 0:03:59.978 ***** 2026-01-28 00:52:43.851770 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-01-28 00:52:43.851778 | orchestrator | 2026-01-28 00:52:43.851787 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-01-28 00:52:43.851796 | orchestrator | Wednesday 28 January 2026 00:52:18 +0000 (0:00:01.123) 0:04:01.102 ***** 2026-01-28 00:52:43.851804 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.851813 | orchestrator | 2026-01-28 00:52:43.851822 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-01-28 00:52:43.851830 | orchestrator 
| Wednesday 28 January 2026 00:52:18 +0000 (0:00:00.144) 0:04:01.246 ***** 2026-01-28 00:52:43.851839 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-01-28 00:52:43.851847 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-01-28 00:52:43.851856 | orchestrator | 2026-01-28 00:52:43.851865 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-01-28 00:52:43.851873 | orchestrator | Wednesday 28 January 2026 00:52:19 +0000 (0:00:01.661) 0:04:02.908 ***** 2026-01-28 00:52:43.851882 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.851891 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.851899 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.851908 | orchestrator | 2026-01-28 00:52:43.851916 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-01-28 00:52:43.851925 | orchestrator | Wednesday 28 January 2026 00:52:20 +0000 (0:00:00.265) 0:04:03.173 ***** 2026-01-28 00:52:43.851934 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:52:43.851942 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:52:43.851951 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:52:43.851959 | orchestrator | 2026-01-28 00:52:43.851968 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-01-28 00:52:43.851977 | orchestrator | 2026-01-28 00:52:43.851985 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-01-28 00:52:43.851994 | orchestrator | Wednesday 28 January 2026 00:52:21 +0000 (0:00:01.202) 0:04:04.376 ***** 2026-01-28 00:52:43.852002 | orchestrator | ok: [testbed-manager] 2026-01-28 00:52:43.852011 | orchestrator | 2026-01-28 00:52:43.852019 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-01-28 00:52:43.852028 | orchestrator | Wednesday 28 January 2026 00:52:21 +0000 (0:00:00.135) 0:04:04.511 ***** 2026-01-28 00:52:43.852037 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-01-28 00:52:43.852050 | orchestrator | 2026-01-28 00:52:43.852075 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-01-28 00:52:43.852091 | orchestrator | Wednesday 28 January 2026 00:52:21 +0000 (0:00:00.206) 0:04:04.718 ***** 2026-01-28 00:52:43.852102 | orchestrator | changed: [testbed-manager] 2026-01-28 00:52:43.852110 | orchestrator | 2026-01-28 00:52:43.852119 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-01-28 00:52:43.852127 | orchestrator | 2026-01-28 00:52:43.852136 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-01-28 00:52:43.852145 | orchestrator | Wednesday 28 January 2026 00:52:27 +0000 (0:00:06.036) 0:04:10.755 ***** 2026-01-28 00:52:43.852153 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:52:43.852162 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:52:43.852170 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:52:43.852179 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:52:43.852188 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:52:43.852196 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:52:43.852205 | orchestrator | 2026-01-28 00:52:43.852213 | orchestrator | TASK [Manage labels] *********************************************************** 2026-01-28 00:52:43.852222 | orchestrator | Wednesday 28 January 2026 00:52:28 +0000 (0:00:00.782) 0:04:11.538 ***** 2026-01-28 00:52:43.852230 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-28 00:52:43.852239 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-01-28 00:52:43.852248 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-28 00:52:43.852256 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-01-28 00:52:43.852265 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-28 00:52:43.852273 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-01-28 00:52:43.852282 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-28 00:52:43.852290 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-28 00:52:43.852299 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-01-28 00:52:43.852307 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-28 00:52:43.852316 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-28 00:52:43.852324 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-28 00:52:43.852339 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-01-28 00:52:43.852348 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-28 00:52:43.852356 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-01-28 00:52:43.852365 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-28 00:52:43.852378 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-01-28 00:52:43.852387 | orchestrator | ok: [testbed-node-1 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-01-28 00:52:43.852395 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-28 00:52:43.852404 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-28 00:52:43.852412 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-01-28 00:52:43.852421 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-28 00:52:43.852430 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-28 00:52:43.852444 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-01-28 00:52:43.852452 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-28 00:52:43.852461 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-28 00:52:43.852470 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-01-28 00:52:43.852478 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-28 00:52:43.852487 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-28 00:52:43.852496 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-01-28 00:52:43.852504 | orchestrator | 2026-01-28 00:52:43.852513 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-01-28 00:52:43.852522 | orchestrator | Wednesday 28 January 2026 00:52:40 +0000 (0:00:12.417) 0:04:23.955 ***** 2026-01-28 00:52:43.852530 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.852539 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.852548 | orchestrator | 
skipping: [testbed-node-5] 2026-01-28 00:52:43.852556 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.852565 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.852573 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.852582 | orchestrator | 2026-01-28 00:52:43.852591 | orchestrator | TASK [Manage taints] *********************************************************** 2026-01-28 00:52:43.852599 | orchestrator | Wednesday 28 January 2026 00:52:41 +0000 (0:00:00.679) 0:04:24.635 ***** 2026-01-28 00:52:43.852608 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:52:43.852616 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:52:43.852625 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:52:43.852634 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:52:43.852642 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:52:43.852651 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:52:43.852659 | orchestrator | 2026-01-28 00:52:43.852668 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:52:43.852677 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:52:43.852687 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-28 00:52:43.852696 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-28 00:52:43.852705 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-01-28 00:52:43.852713 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-28 00:52:43.852722 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-28 00:52:43.852731 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-01-28 00:52:43.852739 | orchestrator | 2026-01-28 00:52:43.852748 | orchestrator | 2026-01-28 00:52:43.852757 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:52:43.852765 | orchestrator | Wednesday 28 January 2026 00:52:42 +0000 (0:00:00.662) 0:04:25.297 ***** 2026-01-28 00:52:43.852774 | orchestrator | =============================================================================== 2026-01-28 00:52:43.852789 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.16s 2026-01-28 00:52:43.852798 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.82s 2026-01-28 00:52:43.852807 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.92s 2026-01-28 00:52:43.852820 | orchestrator | kubectl : Install required packages ------------------------------------ 14.00s 2026-01-28 00:52:43.852829 | orchestrator | Manage labels ---------------------------------------------------------- 12.42s 2026-01-28 00:52:43.852837 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.34s 2026-01-28 00:52:43.852846 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 9.32s 2026-01-28 00:52:43.852859 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.04s 2026-01-28 00:52:43.852868 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.26s 2026-01-28 00:52:43.852876 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.08s 2026-01-28 00:52:43.852885 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.99s 2026-01-28 00:52:43.852893 | orchestrator | k3s_download : Download k3s binary arm64 
-------------------------------- 2.81s 2026-01-28 00:52:43.852902 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.70s 2026-01-28 00:52:43.852911 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.70s 2026-01-28 00:52:43.852919 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.16s 2026-01-28 00:52:43.852928 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.06s 2026-01-28 00:52:43.852936 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 1.97s 2026-01-28 00:52:43.852945 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.87s 2026-01-28 00:52:43.852953 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.79s 2026-01-28 00:52:43.852962 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.76s 2026-01-28 00:52:43.852971 | orchestrator | 2026-01-28 00:52:43 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:52:43.852980 | orchestrator | 2026-01-28 00:52:43 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:52:43.852989 | orchestrator | 2026-01-28 00:52:43 | INFO  | Task 3b7fb2a3-90cd-4d3b-9ea4-51b2c53abd08 is in state STARTED 2026-01-28 00:52:43.852997 | orchestrator | 2026-01-28 00:52:43 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:52:46.882119 | orchestrator | 2026-01-28 00:52:46 | INFO  | Task d7e050b8-07ad-4632-8042-aaf5aa5a56d7 is in state STARTED 2026-01-28 00:52:46.882189 | orchestrator | 2026-01-28 00:52:46 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:52:46.882195 | orchestrator | 2026-01-28 00:52:46 | INFO  | Task 
b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:52:46.882200 | orchestrator | 2026-01-28 00:52:46 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:52:46.883574 | orchestrator | 2026-01-28 00:52:46 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:52:46.884250 | orchestrator | 2026-01-28 00:52:46 | INFO  | Task 3b7fb2a3-90cd-4d3b-9ea4-51b2c53abd08 is in state STARTED 2026-01-28 00:52:46.884263 | orchestrator | 2026-01-28 00:52:46 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:52:49.932653 | orchestrator | 2026-01-28 00:52:49 | INFO  | Task d7e050b8-07ad-4632-8042-aaf5aa5a56d7 is in state STARTED 2026-01-28 00:52:49.933046 | orchestrator | 2026-01-28 00:52:49 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:52:49.933622 | orchestrator | 2026-01-28 00:52:49 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:52:49.934290 | orchestrator | 2026-01-28 00:52:49 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:52:49.936776 | orchestrator | 2026-01-28 00:52:49 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:52:49.938538 | orchestrator | 2026-01-28 00:52:49 | INFO  | Task 3b7fb2a3-90cd-4d3b-9ea4-51b2c53abd08 is in state SUCCESS 2026-01-28 00:52:49.938561 | orchestrator | 2026-01-28 00:52:49 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:52:52.964884 | orchestrator | 2026-01-28 00:52:52 | INFO  | Task d7e050b8-07ad-4632-8042-aaf5aa5a56d7 is in state STARTED 2026-01-28 00:52:52.965001 | orchestrator | 2026-01-28 00:52:52 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:52:52.965021 | orchestrator | 2026-01-28 00:52:52 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:52:52.965577 | orchestrator | 2026-01-28 00:52:52 | INFO  | Task 
822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:52:52.966234 | orchestrator | 2026-01-28 00:52:52 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:52:52.966256 | orchestrator | 2026-01-28 00:52:52 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:52:56.008119 | orchestrator | 2026-01-28 00:52:56 | INFO  | Task d7e050b8-07ad-4632-8042-aaf5aa5a56d7 is in state SUCCESS 2026-01-28 00:52:56.008747 | orchestrator | 2026-01-28 00:52:56 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:52:56.009571 | orchestrator | 2026-01-28 00:52:56 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:52:56.011034 | orchestrator | 2026-01-28 00:52:56 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:52:56.011864 | orchestrator | 2026-01-28 00:52:56 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:52:56.011908 | orchestrator | 2026-01-28 00:52:56 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:52:59.047522 | orchestrator | 2026-01-28 00:52:59 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:52:59.048901 | orchestrator | 2026-01-28 00:52:59 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:52:59.050418 | orchestrator | 2026-01-28 00:52:59 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:52:59.051628 | orchestrator | 2026-01-28 00:52:59 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:52:59.052152 | orchestrator | 2026-01-28 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:53:02.082703 | orchestrator | 2026-01-28 00:53:02 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:53:02.084567 | orchestrator | 2026-01-28 00:53:02 | INFO  | Task 
b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:53:02.086311 | orchestrator | 2026-01-28 00:53:02 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:53:02.088549 | orchestrator | 2026-01-28 00:53:02 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:53:02.088745 | orchestrator | 2026-01-28 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:53:05.127269 | orchestrator | 2026-01-28 00:53:05 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:53:05.129349 | orchestrator | 2026-01-28 00:53:05 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:53:05.132367 | orchestrator | 2026-01-28 00:53:05 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:53:05.134829 | orchestrator | 2026-01-28 00:53:05 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:53:05.135240 | orchestrator | 2026-01-28 00:53:05 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:53:08.169795 | orchestrator | 2026-01-28 00:53:08 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:53:08.171644 | orchestrator | 2026-01-28 00:53:08 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:53:08.173311 | orchestrator | 2026-01-28 00:53:08 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:53:08.175257 | orchestrator | 2026-01-28 00:53:08 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state STARTED 2026-01-28 00:53:08.175287 | orchestrator | 2026-01-28 00:53:08 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:53:11.204574 | orchestrator | 2026-01-28 00:53:11 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:53:11.206119 | orchestrator | 2026-01-28 00:53:11 | INFO  | Task 
b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:53:11.206782 | orchestrator | 2026-01-28 00:53:11 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:53:11.207941 | orchestrator | 2026-01-28 00:53:11 | INFO  | Task 423249d0-80fc-4f5c-b836-20eb6e00962a is in state SUCCESS 2026-01-28 00:53:11.209091 | orchestrator | 2026-01-28 00:53:11.209150 | orchestrator | 2026-01-28 00:53:11.209173 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-01-28 00:53:11.209195 | orchestrator | 2026-01-28 00:53:11.209216 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-28 00:53:11.209237 | orchestrator | Wednesday 28 January 2026 00:52:46 +0000 (0:00:00.155) 0:00:00.155 ***** 2026-01-28 00:53:11.209249 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-28 00:53:11.209260 | orchestrator | 2026-01-28 00:53:11.209271 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-28 00:53:11.209281 | orchestrator | Wednesday 28 January 2026 00:52:47 +0000 (0:00:00.740) 0:00:00.896 ***** 2026-01-28 00:53:11.209293 | orchestrator | changed: [testbed-manager] 2026-01-28 00:53:11.209304 | orchestrator | 2026-01-28 00:53:11.209315 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-01-28 00:53:11.209326 | orchestrator | Wednesday 28 January 2026 00:52:48 +0000 (0:00:01.000) 0:00:01.897 ***** 2026-01-28 00:53:11.209337 | orchestrator | changed: [testbed-manager] 2026-01-28 00:53:11.209348 | orchestrator | 2026-01-28 00:53:11.209359 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:53:11.209388 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:53:11.209401 | orchestrator | 2026-01-28 
00:53:11.209412 | orchestrator | 2026-01-28 00:53:11.209423 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:53:11.209434 | orchestrator | Wednesday 28 January 2026 00:52:48 +0000 (0:00:00.498) 0:00:02.395 ***** 2026-01-28 00:53:11.209445 | orchestrator | =============================================================================== 2026-01-28 00:53:11.209456 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.00s 2026-01-28 00:53:11.209467 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s 2026-01-28 00:53:11.209478 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.50s 2026-01-28 00:53:11.209513 | orchestrator | 2026-01-28 00:53:11.209525 | orchestrator | 2026-01-28 00:53:11.209536 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-01-28 00:53:11.209547 | orchestrator | 2026-01-28 00:53:11.209557 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-01-28 00:53:11.209568 | orchestrator | Wednesday 28 January 2026 00:52:46 +0000 (0:00:00.145) 0:00:00.145 ***** 2026-01-28 00:53:11.209579 | orchestrator | ok: [testbed-manager] 2026-01-28 00:53:11.209591 | orchestrator | 2026-01-28 00:53:11.209601 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-01-28 00:53:11.209612 | orchestrator | Wednesday 28 January 2026 00:52:46 +0000 (0:00:00.522) 0:00:00.668 ***** 2026-01-28 00:53:11.209624 | orchestrator | ok: [testbed-manager] 2026-01-28 00:53:11.209637 | orchestrator | 2026-01-28 00:53:11.209650 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-01-28 00:53:11.209661 | orchestrator | Wednesday 28 January 2026 00:52:47 +0000 (0:00:00.528) 0:00:01.196 ***** 2026-01-28 
00:53:11.209674 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-01-28 00:53:11.209686 | orchestrator | 2026-01-28 00:53:11.209699 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-01-28 00:53:11.209710 | orchestrator | Wednesday 28 January 2026 00:52:48 +0000 (0:00:00.652) 0:00:01.848 ***** 2026-01-28 00:53:11.209723 | orchestrator | changed: [testbed-manager] 2026-01-28 00:53:11.209734 | orchestrator | 2026-01-28 00:53:11.209747 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-01-28 00:53:11.209759 | orchestrator | Wednesday 28 January 2026 00:52:49 +0000 (0:00:01.420) 0:00:03.269 ***** 2026-01-28 00:53:11.209771 | orchestrator | changed: [testbed-manager] 2026-01-28 00:53:11.209783 | orchestrator | 2026-01-28 00:53:11.209795 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-01-28 00:53:11.209808 | orchestrator | Wednesday 28 January 2026 00:52:50 +0000 (0:00:00.471) 0:00:03.740 ***** 2026-01-28 00:53:11.209821 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-28 00:53:11.209833 | orchestrator | 2026-01-28 00:53:11.209845 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-01-28 00:53:11.209857 | orchestrator | Wednesday 28 January 2026 00:52:51 +0000 (0:00:01.372) 0:00:05.113 ***** 2026-01-28 00:53:11.209869 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-28 00:53:11.209882 | orchestrator | 2026-01-28 00:53:11.209894 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-01-28 00:53:11.209907 | orchestrator | Wednesday 28 January 2026 00:52:52 +0000 (0:00:00.766) 0:00:05.879 ***** 2026-01-28 00:53:11.209919 | orchestrator | ok: [testbed-manager] 2026-01-28 00:53:11.209932 | orchestrator | 2026-01-28 00:53:11.209945 | orchestrator | TASK [Enable 
kubectl command line completion] ********************************** 2026-01-28 00:53:11.209956 | orchestrator | Wednesday 28 January 2026 00:52:52 +0000 (0:00:00.506) 0:00:06.385 ***** 2026-01-28 00:53:11.209967 | orchestrator | ok: [testbed-manager] 2026-01-28 00:53:11.209978 | orchestrator | 2026-01-28 00:53:11.209988 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:53:11.209999 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:53:11.210010 | orchestrator | 2026-01-28 00:53:11.210106 | orchestrator | 2026-01-28 00:53:11.210119 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:53:11.210130 | orchestrator | Wednesday 28 January 2026 00:52:52 +0000 (0:00:00.295) 0:00:06.681 ***** 2026-01-28 00:53:11.210140 | orchestrator | =============================================================================== 2026-01-28 00:53:11.210151 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.42s 2026-01-28 00:53:11.210162 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.37s 2026-01-28 00:53:11.210173 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.77s 2026-01-28 00:53:11.210209 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.65s 2026-01-28 00:53:11.210221 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s 2026-01-28 00:53:11.210232 | orchestrator | Get home directory of operator user ------------------------------------- 0.52s 2026-01-28 00:53:11.210243 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.51s 2026-01-28 00:53:11.210254 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.47s 
2026-01-28 00:53:11.210265 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.30s 2026-01-28 00:53:11.210275 | orchestrator | 2026-01-28 00:53:11.210286 | orchestrator | 2026-01-28 00:53:11.210297 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-01-28 00:53:11.210307 | orchestrator | 2026-01-28 00:53:11.210318 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-28 00:53:11.210329 | orchestrator | Wednesday 28 January 2026 00:50:53 +0000 (0:00:00.281) 0:00:00.281 ***** 2026-01-28 00:53:11.210340 | orchestrator | ok: [localhost] => { 2026-01-28 00:53:11.210358 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-01-28 00:53:11.210370 | orchestrator | } 2026-01-28 00:53:11.210381 | orchestrator | 2026-01-28 00:53:11.210392 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-01-28 00:53:11.210403 | orchestrator | Wednesday 28 January 2026 00:50:53 +0000 (0:00:00.076) 0:00:00.358 ***** 2026-01-28 00:53:11.210415 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-01-28 00:53:11.210427 | orchestrator | ...ignoring 2026-01-28 00:53:11.210439 | orchestrator | 2026-01-28 00:53:11.210450 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-01-28 00:53:11.210461 | orchestrator | Wednesday 28 January 2026 00:50:56 +0000 (0:00:03.022) 0:00:03.381 ***** 2026-01-28 00:53:11.210471 | orchestrator | skipping: [localhost] 2026-01-28 00:53:11.210482 | orchestrator | 2026-01-28 00:53:11.210493 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-01-28 00:53:11.210504 | orchestrator | Wednesday 28 January 2026 00:50:56 +0000 (0:00:00.066) 0:00:03.447 ***** 2026-01-28 00:53:11.210514 | orchestrator | ok: [localhost] 2026-01-28 00:53:11.210525 | orchestrator | 2026-01-28 00:53:11.210536 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 00:53:11.210546 | orchestrator | 2026-01-28 00:53:11.210557 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 00:53:11.210568 | orchestrator | Wednesday 28 January 2026 00:50:57 +0000 (0:00:00.236) 0:00:03.684 ***** 2026-01-28 00:53:11.210579 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:53:11.210589 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:53:11.210600 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:53:11.210611 | orchestrator | 2026-01-28 00:53:11.210622 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 00:53:11.210632 | orchestrator | Wednesday 28 January 2026 00:50:57 +0000 (0:00:00.544) 0:00:04.228 ***** 2026-01-28 00:53:11.210643 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-01-28 00:53:11.210654 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-01-28 00:53:11.210665 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-01-28 00:53:11.210675 | orchestrator | 2026-01-28 00:53:11.210686 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-01-28 00:53:11.210697 | orchestrator | 2026-01-28 00:53:11.210707 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-28 00:53:11.210718 | orchestrator | Wednesday 28 January 2026 00:50:58 +0000 (0:00:00.551) 0:00:04.779 ***** 2026-01-28 00:53:11.210729 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:53:11.210746 | orchestrator | 2026-01-28 00:53:11.210757 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-28 00:53:11.210768 | orchestrator | Wednesday 28 January 2026 00:50:58 +0000 (0:00:00.531) 0:00:05.311 ***** 2026-01-28 00:53:11.210779 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:53:11.210789 | orchestrator | 2026-01-28 00:53:11.210800 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-01-28 00:53:11.210811 | orchestrator | Wednesday 28 January 2026 00:50:59 +0000 (0:00:01.345) 0:00:06.656 ***** 2026-01-28 00:53:11.210821 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:53:11.210832 | orchestrator | 2026-01-28 00:53:11.210843 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-01-28 00:53:11.210854 | orchestrator | Wednesday 28 January 2026 00:51:00 +0000 (0:00:00.603) 0:00:07.260 ***** 2026-01-28 00:53:11.210864 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:53:11.210875 | orchestrator | 2026-01-28 00:53:11.210886 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-01-28 00:53:11.210897 | 
orchestrator | Wednesday 28 January 2026 00:51:01 +0000 (0:00:00.529) 0:00:07.789 ***** 2026-01-28 00:53:11.210907 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:53:11.210918 | orchestrator | 2026-01-28 00:53:11.210928 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-01-28 00:53:11.210939 | orchestrator | Wednesday 28 January 2026 00:51:01 +0000 (0:00:00.537) 0:00:08.326 ***** 2026-01-28 00:53:11.210950 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:53:11.210961 | orchestrator | 2026-01-28 00:53:11.210972 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-28 00:53:11.210982 | orchestrator | Wednesday 28 January 2026 00:51:02 +0000 (0:00:00.928) 0:00:09.255 ***** 2026-01-28 00:53:11.210993 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:53:11.211004 | orchestrator | 2026-01-28 00:53:11.211015 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-01-28 00:53:11.211032 | orchestrator | Wednesday 28 January 2026 00:51:03 +0000 (0:00:01.038) 0:00:10.294 ***** 2026-01-28 00:53:11.211069 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:53:11.211081 | orchestrator | 2026-01-28 00:53:11.211092 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-01-28 00:53:11.211102 | orchestrator | Wednesday 28 January 2026 00:51:04 +0000 (0:00:01.343) 0:00:11.638 ***** 2026-01-28 00:53:11.211113 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:53:11.211124 | orchestrator | 2026-01-28 00:53:11.211135 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-01-28 00:53:11.211145 | orchestrator | Wednesday 28 January 2026 00:51:06 +0000 (0:00:01.177) 0:00:12.816 ***** 2026-01-28 00:53:11.211156 | orchestrator | 
skipping: [testbed-node-0] 2026-01-28 00:53:11.211167 | orchestrator | 2026-01-28 00:53:11.211177 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-01-28 00:53:11.211188 | orchestrator | Wednesday 28 January 2026 00:51:06 +0000 (0:00:00.462) 0:00:13.279 ***** 2026-01-28 00:53:11.211210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:53:11.211234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:53:11.211248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:53:11.211261 | orchestrator | 2026-01-28 00:53:11.211272 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-01-28 00:53:11.211283 | orchestrator | Wednesday 28 January 2026 00:51:07 +0000 (0:00:01.146) 0:00:14.425 ***** 2026-01-28 00:53:11.211309 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:53:11.211322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:53:11.211342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:53:11.211354 | orchestrator | 2026-01-28 00:53:11.211364 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-01-28 00:53:11.211375 | orchestrator | Wednesday 28 January 2026 00:51:10 +0000 (0:00:02.694) 0:00:17.120 ***** 2026-01-28 00:53:11.211386 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-28 00:53:11.211397 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-28 00:53:11.211408 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-01-28 00:53:11.211419 | 
orchestrator | 2026-01-28 00:53:11.211430 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-01-28 00:53:11.211440 | orchestrator | Wednesday 28 January 2026 00:51:11 +0000 (0:00:01.512) 0:00:18.632 ***** 2026-01-28 00:53:11.211451 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-28 00:53:11.211462 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-28 00:53:11.211473 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-01-28 00:53:11.211484 | orchestrator | 2026-01-28 00:53:11.211500 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-01-28 00:53:11.211511 | orchestrator | Wednesday 28 January 2026 00:51:15 +0000 (0:00:03.077) 0:00:21.710 ***** 2026-01-28 00:53:11.211522 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-28 00:53:11.211533 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-28 00:53:11.211543 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-01-28 00:53:11.211554 | orchestrator | 2026-01-28 00:53:11.211565 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-01-28 00:53:11.211575 | orchestrator | Wednesday 28 January 2026 00:51:16 +0000 (0:00:01.645) 0:00:23.356 ***** 2026-01-28 00:53:11.211586 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-28 00:53:11.211604 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-28 00:53:11.211615 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-01-28 00:53:11.211626 | orchestrator | 2026-01-28 00:53:11.211641 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-01-28 00:53:11.211652 | orchestrator | Wednesday 28 January 2026 00:51:18 +0000 (0:00:02.071) 0:00:25.428 ***** 2026-01-28 00:53:11.211663 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-28 00:53:11.211674 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-28 00:53:11.211685 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-01-28 00:53:11.211695 | orchestrator | 2026-01-28 00:53:11.211706 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-01-28 00:53:11.211717 | orchestrator | Wednesday 28 January 2026 00:51:20 +0000 (0:00:01.880) 0:00:27.308 ***** 2026-01-28 00:53:11.211727 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-28 00:53:11.211738 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-28 00:53:11.211749 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-01-28 00:53:11.211759 | orchestrator | 2026-01-28 00:53:11.211770 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-01-28 00:53:11.211781 | orchestrator | Wednesday 28 January 2026 00:51:22 +0000 (0:00:01.757) 0:00:29.065 ***** 2026-01-28 00:53:11.211792 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:53:11.211802 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:53:11.211813 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:53:11.211824 | orchestrator | 2026-01-28 
00:53:11.211835 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-01-28 00:53:11.211845 | orchestrator | Wednesday 28 January 2026 00:51:22 +0000 (0:00:00.605) 0:00:29.671 ***** 2026-01-28 00:53:11.211857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:53:11.211878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:53:11.211901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:53:11.211914 | orchestrator | 2026-01-28 00:53:11.211924 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-01-28 00:53:11.211935 | orchestrator | Wednesday 28 January 2026 00:51:24 +0000 (0:00:01.773) 0:00:31.445 ***** 2026-01-28 00:53:11.211946 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:53:11.211957 | orchestrator | changed: [testbed-node-2] 
2026-01-28 00:53:11.211967 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:53:11.211978 | orchestrator | 2026-01-28 00:53:11.211989 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-01-28 00:53:11.211999 | orchestrator | Wednesday 28 January 2026 00:51:25 +0000 (0:00:00.850) 0:00:32.295 ***** 2026-01-28 00:53:11.212010 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:53:11.212021 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:53:11.212032 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:53:11.212112 | orchestrator | 2026-01-28 00:53:11.212124 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-01-28 00:53:11.212135 | orchestrator | Wednesday 28 January 2026 00:51:32 +0000 (0:00:06.925) 0:00:39.221 ***** 2026-01-28 00:53:11.212146 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:53:11.212157 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:53:11.212167 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:53:11.212178 | orchestrator | 2026-01-28 00:53:11.212189 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-28 00:53:11.212200 | orchestrator | 2026-01-28 00:53:11.212211 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-28 00:53:11.212221 | orchestrator | Wednesday 28 January 2026 00:51:33 +0000 (0:00:00.540) 0:00:39.762 ***** 2026-01-28 00:53:11.212232 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:53:11.212243 | orchestrator | 2026-01-28 00:53:11.212253 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-28 00:53:11.212264 | orchestrator | Wednesday 28 January 2026 00:51:33 +0000 (0:00:00.621) 0:00:40.383 ***** 2026-01-28 00:53:11.212275 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:53:11.212286 | orchestrator | 2026-01-28 
00:53:11.212297 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-28 00:53:11.212307 | orchestrator | Wednesday 28 January 2026 00:51:33 +0000 (0:00:00.235) 0:00:40.619 ***** 2026-01-28 00:53:11.212318 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:53:11.212329 | orchestrator | 2026-01-28 00:53:11.212340 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-28 00:53:11.212351 | orchestrator | Wednesday 28 January 2026 00:51:36 +0000 (0:00:02.083) 0:00:42.703 ***** 2026-01-28 00:53:11.212369 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:53:11.212380 | orchestrator | 2026-01-28 00:53:11.212391 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-28 00:53:11.212402 | orchestrator | 2026-01-28 00:53:11.212413 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-28 00:53:11.212424 | orchestrator | Wednesday 28 January 2026 00:52:30 +0000 (0:00:54.260) 0:01:36.964 ***** 2026-01-28 00:53:11.212434 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:53:11.212445 | orchestrator | 2026-01-28 00:53:11.212456 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-28 00:53:11.212467 | orchestrator | Wednesday 28 January 2026 00:52:31 +0000 (0:00:00.947) 0:01:37.911 ***** 2026-01-28 00:53:11.212477 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:53:11.212487 | orchestrator | 2026-01-28 00:53:11.212496 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-28 00:53:11.212506 | orchestrator | Wednesday 28 January 2026 00:52:31 +0000 (0:00:00.658) 0:01:38.569 ***** 2026-01-28 00:53:11.212515 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:53:11.212525 | orchestrator | 2026-01-28 00:53:11.212534 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2026-01-28 00:53:11.212544 | orchestrator | Wednesday 28 January 2026 00:52:34 +0000 (0:00:02.546) 0:01:41.116 ***** 2026-01-28 00:53:11.212553 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:53:11.212563 | orchestrator | 2026-01-28 00:53:11.212572 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-01-28 00:53:11.212582 | orchestrator | 2026-01-28 00:53:11.212592 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-01-28 00:53:11.212608 | orchestrator | Wednesday 28 January 2026 00:52:47 +0000 (0:00:13.161) 0:01:54.277 ***** 2026-01-28 00:53:11.212618 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:53:11.212627 | orchestrator | 2026-01-28 00:53:11.212637 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-01-28 00:53:11.212647 | orchestrator | Wednesday 28 January 2026 00:52:48 +0000 (0:00:00.732) 0:01:55.010 ***** 2026-01-28 00:53:11.212656 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:53:11.212666 | orchestrator | 2026-01-28 00:53:11.212675 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-01-28 00:53:11.212685 | orchestrator | Wednesday 28 January 2026 00:52:48 +0000 (0:00:00.506) 0:01:55.516 ***** 2026-01-28 00:53:11.212695 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:53:11.212704 | orchestrator | 2026-01-28 00:53:11.212714 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-01-28 00:53:11.212723 | orchestrator | Wednesday 28 January 2026 00:52:55 +0000 (0:00:06.855) 0:02:02.372 ***** 2026-01-28 00:53:11.212733 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:53:11.212742 | orchestrator | 2026-01-28 00:53:11.212752 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2026-01-28 00:53:11.212761 | orchestrator | 2026-01-28 00:53:11.212781 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-01-28 00:53:11.212791 | orchestrator | Wednesday 28 January 2026 00:53:06 +0000 (0:00:10.940) 0:02:13.313 ***** 2026-01-28 00:53:11.212800 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:53:11.212810 | orchestrator | 2026-01-28 00:53:11.212819 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-01-28 00:53:11.212829 | orchestrator | Wednesday 28 January 2026 00:53:07 +0000 (0:00:00.495) 0:02:13.808 ***** 2026-01-28 00:53:11.212838 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-28 00:53:11.212848 | orchestrator | enable_outward_rabbitmq_True 2026-01-28 00:53:11.212857 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-01-28 00:53:11.212867 | orchestrator | outward_rabbitmq_restart 2026-01-28 00:53:11.212876 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:53:11.212886 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:53:11.212901 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:53:11.212911 | orchestrator | 2026-01-28 00:53:11.212920 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-01-28 00:53:11.212930 | orchestrator | skipping: no hosts matched 2026-01-28 00:53:11.212939 | orchestrator | 2026-01-28 00:53:11.212949 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-01-28 00:53:11.212959 | orchestrator | skipping: no hosts matched 2026-01-28 00:53:11.212968 | orchestrator | 2026-01-28 00:53:11.212978 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-01-28 00:53:11.212987 | orchestrator | skipping: no hosts matched 
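The three "Restart rabbitmq services" plays above run strictly one node at a time: optionally drain the node into maintenance mode (skipped in this run), restart the container, then wait for the broker to come back before moving on. A minimal, hypothetical Python sketch of that ordering — the command strings and helper are illustrative, not kolla-ansible's actual implementation:

```python
# Hypothetical sketch of the serialized restart ordering seen in the plays
# above: per node, optional maintenance-mode drain, container restart, then
# wait for startup. Command strings are illustrative stand-ins.
def rolling_restart_plan(nodes, use_maintenance_mode=False):
    plan = []
    for node in nodes:  # one node is fully handled before the next begins
        if use_maintenance_mode:
            # "rabbitmq-upgrade drain" puts a RabbitMQ node into maintenance
            # mode; in this run the task was skipped on every node.
            plan.append((node, "rabbitmq-upgrade drain"))
        plan.append((node, "docker restart rabbitmq"))
        plan.append((node, "rabbitmqctl await_startup"))
    return plan

plan = rolling_restart_plan(["testbed-node-0", "testbed-node-1", "testbed-node-2"])
```

Serializing the restart this way is what keeps the cluster quorate: at most one node is down at any moment.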
2026-01-28 00:53:11.212997 | orchestrator | 2026-01-28 00:53:11.213006 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:53:11.213016 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-28 00:53:11.213026 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-01-28 00:53:11.213036 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:53:11.213068 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-01-28 00:53:11.213078 | orchestrator | 2026-01-28 00:53:11.213087 | orchestrator | 2026-01-28 00:53:11.213097 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:53:11.213106 | orchestrator | Wednesday 28 January 2026 00:53:09 +0000 (0:00:02.417) 0:02:16.225 ***** 2026-01-28 00:53:11.213116 | orchestrator | =============================================================================== 2026-01-28 00:53:11.213125 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 78.36s 2026-01-28 00:53:11.213134 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 11.49s 2026-01-28 00:53:11.213144 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.93s 2026-01-28 00:53:11.213153 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.08s 2026-01-28 00:53:11.213163 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.02s 2026-01-28 00:53:11.213172 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.69s 2026-01-28 00:53:11.213182 | orchestrator | rabbitmq : Enable all stable feature flags 
------------------------------ 2.42s 2026-01-28 00:53:11.213191 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.30s 2026-01-28 00:53:11.213201 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.07s 2026-01-28 00:53:11.213210 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.88s 2026-01-28 00:53:11.213220 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.77s 2026-01-28 00:53:11.213229 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.76s 2026-01-28 00:53:11.213238 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.65s 2026-01-28 00:53:11.213248 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.51s 2026-01-28 00:53:11.213257 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.40s 2026-01-28 00:53:11.213272 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.35s 2026-01-28 00:53:11.213281 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.34s 2026-01-28 00:53:11.213291 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.18s 2026-01-28 00:53:11.213300 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.15s 2026-01-28 00:53:11.213310 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.04s 2026-01-28 00:53:11.213325 | orchestrator | 2026-01-28 00:53:11 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:53:14.255670 | orchestrator | 2026-01-28 00:53:14 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:53:14.258616 | orchestrator | 2026-01-28 00:53:14 | INFO  | Task 
b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:53:14.260454 | orchestrator | 2026-01-28 00:53:14 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state STARTED 2026-01-28 00:53:14.260522 | orchestrator | 2026-01-28 00:53:14 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated roughly every 3 seconds from 00:53:17 through 00:54:09: tasks c17fea7a-1ddc-4d82-852c-8a992702ad4e, b90fc0f8-92cb-41ec-8105-64e6204ffa9f, and 822eaf5a-6023-48f1-a343-6c70341af705 remain in state STARTED ...]
2026-01-28 00:54:12.168725 | orchestrator | 2026-01-28 00:54:12 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:54:12.169062 | orchestrator | 2026-01-28 00:54:12 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:54:12.170870 | orchestrator |
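The repeated "is in state STARTED" / "Wait 1 second(s) until the next check" lines are the trace of a simple poll-until-terminal loop over the submitted task IDs. A hypothetical sketch of that pattern (modelling the logged behaviour, not osism's actual code; `get_state` is a stand-in for the real task-state lookup):

```python
import time

# Hypothetical poll-until-done loop matching the log pattern above: query
# each task's state, log it, and sleep between rounds until every task has
# reached a terminal state.
def wait_for_tasks(task_ids, get_state, interval=1, log=print):
    while True:
        states = {t: get_state(t) for t in task_ids}
        for task_id, state in states.items():
            log(f"Task {task_id} is in state {state}")
        if all(s in ("SUCCESS", "FAILURE") for s in states.values()):
            return states
        log(f"Wait {interval} second(s) until the next check")
        time.sleep(interval)
```

Each round re-queries and re-logs every task, which is why all three UUIDs reappear on every check until the first one flips to SUCCESS.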
2026-01-28 00:54:12 | INFO  | Task 822eaf5a-6023-48f1-a343-6c70341af705 is in state SUCCESS 2026-01-28 00:54:12.173056 | orchestrator | 2026-01-28 00:54:12.173077 | orchestrator | 2026-01-28 00:54:12.173083 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 00:54:12.173091 | orchestrator | 2026-01-28 00:54:12.173097 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 00:54:12.173104 | orchestrator | Wednesday 28 January 2026 00:51:41 +0000 (0:00:00.148) 0:00:00.148 ***** 2026-01-28 00:54:12.173110 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:54:12.173117 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:54:12.173122 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:54:12.173128 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:54:12.173134 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:54:12.173139 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:54:12.173164 | orchestrator | 2026-01-28 00:54:12.173170 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 00:54:12.173176 | orchestrator | Wednesday 28 January 2026 00:51:42 +0000 (0:00:00.684) 0:00:00.832 ***** 2026-01-28 00:54:12.173182 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-01-28 00:54:12.173188 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-01-28 00:54:12.173194 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-01-28 00:54:12.173200 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-01-28 00:54:12.173206 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-01-28 00:54:12.173212 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-01-28 00:54:12.173217 | orchestrator | 2026-01-28 00:54:12.173223 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-01-28 
00:54:12.173229 | orchestrator | 2026-01-28 00:54:12.173235 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-01-28 00:54:12.173240 | orchestrator | Wednesday 28 January 2026 00:51:43 +0000 (0:00:01.035) 0:00:01.867 ***** 2026-01-28 00:54:12.173247 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:54:12.173254 | orchestrator | 2026-01-28 00:54:12.173260 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-01-28 00:54:12.173266 | orchestrator | Wednesday 28 January 2026 00:51:45 +0000 (0:00:01.709) 0:00:03.577 ***** 2026-01-28 00:54:12.173274 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173282 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173289 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173337 | orchestrator | 2026-01-28 00:54:12.173343 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-01-28 00:54:12.173348 | orchestrator | Wednesday 28 January 2026 00:51:47 +0000 (0:00:01.671) 0:00:05.248 ***** 2026-01-28 00:54:12.173354 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173388 | orchestrator | 2026-01-28 00:54:12.173393 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-01-28 00:54:12.173399 | orchestrator | Wednesday 28 January 2026 00:51:49 +0000 (0:00:02.030) 0:00:07.278 ***** 2026-01-28 00:54:12.173404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173418 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-28 00:54:12.173428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173451 | orchestrator | 2026-01-28 00:54:12.173457 
| orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-01-28 00:54:12.173462 | orchestrator | Wednesday 28 January 2026 00:51:50 +0000 (0:00:01.459) 0:00:08.738 ***** 2026-01-28 00:54:12.173468 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173479 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173513 | orchestrator | 2026-01-28 00:54:12.173518 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-01-28 00:54:12.173524 | orchestrator | Wednesday 28 January 2026 00:51:52 +0000 (0:00:01.577) 0:00:10.316 ***** 2026-01-28 00:54:12.173530 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173535 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.173567 | orchestrator | 2026-01-28 00:54:12.173573 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-01-28 00:54:12.173578 | orchestrator | Wednesday 28 January 2026 00:51:53 +0000 (0:00:01.285) 0:00:11.601 ***** 2026-01-28 00:54:12.173584 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:54:12.173590 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:54:12.173645 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:54:12.173651 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:54:12.173656 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:54:12.173662 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:54:12.173673 | orchestrator | 2026-01-28 00:54:12.173679 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-01-28 00:54:12.173690 | orchestrator | Wednesday 28 January 2026 00:51:55 +0000 (0:00:02.569) 0:00:14.171 ***** 2026-01-28 00:54:12.173700 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-01-28 00:54:12.173707 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-01-28 00:54:12.173714 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-01-28 00:54:12.173723 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-01-28 00:54:12.173730 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-01-28 00:54:12.173736 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-01-28 00:54:12.173742 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-28 00:54:12.173748 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-28 00:54:12.173754 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-28 00:54:12.173760 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-28 00:54:12.173766 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-28 00:54:12.173772 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-01-28 00:54:12.173779 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-28 00:54:12.173786 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-28 00:54:12.173792 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-28 00:54:12.173799 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-28 00:54:12.173807 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-28 00:54:12.173816 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-01-28 00:54:12.173822 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-28 00:54:12.174094 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-28 00:54:12.174114 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-28 00:54:12.174120 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-28 00:54:12.174126 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-28 00:54:12.174131 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-01-28 00:54:12.174137 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-28 00:54:12.174142 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-28 00:54:12.174147 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-28 00:54:12.174153 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-28 00:54:12.174158 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-28 00:54:12.174164 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-01-28 00:54:12.174169 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-28 00:54:12.174175 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-28 00:54:12.174180 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-28 00:54:12.174186 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-28 00:54:12.174191 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-28 00:54:12.174196 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-01-28 00:54:12.174202 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-28 00:54:12.174211 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-28 00:54:12.174217 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-01-28 00:54:12.174222 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-28 00:54:12.174233 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-28 00:54:12.174239 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-01-28 00:54:12.174244 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-01-28 00:54:12.174250 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-01-28 00:54:12.174256 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-01-28 00:54:12.174261 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-01-28 00:54:12.174267 | orchestrator | ok: [testbed-node-1] 
=> (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-01-28 00:54:12.174272 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-01-28 00:54:12.174278 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-28 00:54:12.174287 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-28 00:54:12.174293 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-01-28 00:54:12.174298 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-28 00:54:12.174304 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-28 00:54:12.174310 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-01-28 00:54:12.174315 | orchestrator | 2026-01-28 00:54:12.174321 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-28 00:54:12.174326 | orchestrator | Wednesday 28 January 2026 00:52:15 +0000 (0:00:19.897) 0:00:34.069 ***** 2026-01-28 00:54:12.174332 | orchestrator | 2026-01-28 00:54:12.174337 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-28 00:54:12.174343 | orchestrator | Wednesday 28 January 2026 00:52:15 +0000 (0:00:00.069) 0:00:34.138 ***** 2026-01-28 00:54:12.174352 | orchestrator | 2026-01-28 00:54:12.174362 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-28 
00:54:12.174367 | orchestrator | Wednesday 28 January 2026 00:52:16 +0000 (0:00:00.079) 0:00:34.217 ***** 2026-01-28 00:54:12.174373 | orchestrator | 2026-01-28 00:54:12.174378 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-28 00:54:12.174383 | orchestrator | Wednesday 28 January 2026 00:52:16 +0000 (0:00:00.072) 0:00:34.289 ***** 2026-01-28 00:54:12.174389 | orchestrator | 2026-01-28 00:54:12.174394 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-28 00:54:12.174399 | orchestrator | Wednesday 28 January 2026 00:52:16 +0000 (0:00:00.063) 0:00:34.353 ***** 2026-01-28 00:54:12.174405 | orchestrator | 2026-01-28 00:54:12.174410 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-01-28 00:54:12.174415 | orchestrator | Wednesday 28 January 2026 00:52:16 +0000 (0:00:00.126) 0:00:34.480 ***** 2026-01-28 00:54:12.174421 | orchestrator | 2026-01-28 00:54:12.174426 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-01-28 00:54:12.174432 | orchestrator | Wednesday 28 January 2026 00:52:16 +0000 (0:00:00.117) 0:00:34.598 ***** 2026-01-28 00:54:12.174437 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:54:12.174443 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:54:12.174448 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:54:12.174453 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:54:12.174459 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:54:12.174464 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:54:12.174469 | orchestrator | 2026-01-28 00:54:12.174475 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-01-28 00:54:12.174480 | orchestrator | Wednesday 28 January 2026 00:52:18 +0000 (0:00:02.577) 0:00:37.175 ***** 2026-01-28 00:54:12.174486 | orchestrator | changed: [testbed-node-0] 
2026-01-28 00:54:12.174491 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:54:12.174496 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:54:12.174502 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:54:12.174507 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:54:12.174512 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:54:12.174518 | orchestrator | 2026-01-28 00:54:12.174523 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-01-28 00:54:12.174528 | orchestrator | 2026-01-28 00:54:12.174534 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-28 00:54:12.174542 | orchestrator | Wednesday 28 January 2026 00:52:44 +0000 (0:00:25.673) 0:01:02.849 ***** 2026-01-28 00:54:12.174548 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:54:12.174557 | orchestrator | 2026-01-28 00:54:12.174563 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-28 00:54:12.174568 | orchestrator | Wednesday 28 January 2026 00:52:45 +0000 (0:00:00.723) 0:01:03.573 ***** 2026-01-28 00:54:12.174573 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:54:12.174579 | orchestrator | 2026-01-28 00:54:12.174588 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-01-28 00:54:12.174594 | orchestrator | Wednesday 28 January 2026 00:52:45 +0000 (0:00:00.540) 0:01:04.113 ***** 2026-01-28 00:54:12.174600 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:54:12.174605 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:54:12.174611 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:54:12.174616 | orchestrator | 2026-01-28 00:54:12.174622 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB 
volume availability] *************** 2026-01-28 00:54:12.174627 | orchestrator | Wednesday 28 January 2026 00:52:47 +0000 (0:00:01.236) 0:01:05.349 ***** 2026-01-28 00:54:12.174633 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:54:12.174638 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:54:12.174644 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:54:12.174649 | orchestrator | 2026-01-28 00:54:12.174655 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-01-28 00:54:12.174660 | orchestrator | Wednesday 28 January 2026 00:52:47 +0000 (0:00:00.664) 0:01:06.013 ***** 2026-01-28 00:54:12.174666 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:54:12.174671 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:54:12.174676 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:54:12.174682 | orchestrator | 2026-01-28 00:54:12.174687 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-01-28 00:54:12.174693 | orchestrator | Wednesday 28 January 2026 00:52:48 +0000 (0:00:00.536) 0:01:06.550 ***** 2026-01-28 00:54:12.174698 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:54:12.174704 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:54:12.174709 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:54:12.174715 | orchestrator | 2026-01-28 00:54:12.174720 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-01-28 00:54:12.174726 | orchestrator | Wednesday 28 January 2026 00:52:49 +0000 (0:00:00.795) 0:01:07.345 ***** 2026-01-28 00:54:12.174731 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:54:12.174736 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:54:12.174742 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:54:12.174747 | orchestrator | 2026-01-28 00:54:12.174753 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-01-28 00:54:12.174758 | 
orchestrator | Wednesday 28 January 2026 00:52:49 +0000 (0:00:00.801) 0:01:08.146 ***** 2026-01-28 00:54:12.174764 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.174769 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.174775 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.174780 | orchestrator | 2026-01-28 00:54:12.174786 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-01-28 00:54:12.174791 | orchestrator | Wednesday 28 January 2026 00:52:50 +0000 (0:00:00.347) 0:01:08.494 ***** 2026-01-28 00:54:12.174797 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.174802 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.174808 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.174813 | orchestrator | 2026-01-28 00:54:12.174818 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-01-28 00:54:12.174824 | orchestrator | Wednesday 28 January 2026 00:52:50 +0000 (0:00:00.261) 0:01:08.755 ***** 2026-01-28 00:54:12.174829 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.174835 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.174840 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.174846 | orchestrator | 2026-01-28 00:54:12.174851 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-01-28 00:54:12.174865 | orchestrator | Wednesday 28 January 2026 00:52:50 +0000 (0:00:00.260) 0:01:09.015 ***** 2026-01-28 00:54:12.174870 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.174876 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.174881 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.174887 | orchestrator | 2026-01-28 00:54:12.174892 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-01-28 00:54:12.174898 | 
orchestrator | Wednesday 28 January 2026 00:52:51 +0000 (0:00:00.517) 0:01:09.533 ***** 2026-01-28 00:54:12.174903 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.174909 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.174914 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.174920 | orchestrator | 2026-01-28 00:54:12.174925 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-01-28 00:54:12.174931 | orchestrator | Wednesday 28 January 2026 00:52:51 +0000 (0:00:00.298) 0:01:09.832 ***** 2026-01-28 00:54:12.174936 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.174941 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.174947 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.174952 | orchestrator | 2026-01-28 00:54:12.174958 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-01-28 00:54:12.174963 | orchestrator | Wednesday 28 January 2026 00:52:52 +0000 (0:00:00.391) 0:01:10.223 ***** 2026-01-28 00:54:12.174969 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.174974 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.174980 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.174985 | orchestrator | 2026-01-28 00:54:12.174990 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-01-28 00:54:12.175008 | orchestrator | Wednesday 28 January 2026 00:52:52 +0000 (0:00:00.483) 0:01:10.706 ***** 2026-01-28 00:54:12.175013 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175019 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175024 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175030 | orchestrator | 2026-01-28 00:54:12.175035 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-01-28 00:54:12.175040 | 
orchestrator | Wednesday 28 January 2026 00:52:53 +0000 (0:00:00.712) 0:01:11.418 ***** 2026-01-28 00:54:12.175049 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175054 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175060 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175065 | orchestrator | 2026-01-28 00:54:12.175071 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-01-28 00:54:12.175076 | orchestrator | Wednesday 28 January 2026 00:52:53 +0000 (0:00:00.272) 0:01:11.691 ***** 2026-01-28 00:54:12.175082 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175087 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175093 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175098 | orchestrator | 2026-01-28 00:54:12.175107 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-01-28 00:54:12.175117 | orchestrator | Wednesday 28 January 2026 00:52:53 +0000 (0:00:00.282) 0:01:11.973 ***** 2026-01-28 00:54:12.175126 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175136 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175148 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175160 | orchestrator | 2026-01-28 00:54:12.175169 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-01-28 00:54:12.175178 | orchestrator | Wednesday 28 January 2026 00:52:54 +0000 (0:00:00.243) 0:01:12.217 ***** 2026-01-28 00:54:12.175188 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175197 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175206 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175216 | orchestrator | 2026-01-28 00:54:12.175224 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-01-28 00:54:12.175234 | 
orchestrator | Wednesday 28 January 2026 00:52:54 +0000 (0:00:00.280) 0:01:12.498 ***** 2026-01-28 00:54:12.175250 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:54:12.175261 | orchestrator | 2026-01-28 00:54:12.175269 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-01-28 00:54:12.175274 | orchestrator | Wednesday 28 January 2026 00:52:54 +0000 (0:00:00.646) 0:01:13.145 ***** 2026-01-28 00:54:12.175280 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:54:12.175285 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:54:12.175291 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:54:12.175296 | orchestrator | 2026-01-28 00:54:12.175301 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-01-28 00:54:12.175307 | orchestrator | Wednesday 28 January 2026 00:52:55 +0000 (0:00:00.419) 0:01:13.564 ***** 2026-01-28 00:54:12.175312 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:54:12.175317 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:54:12.175323 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:54:12.175328 | orchestrator | 2026-01-28 00:54:12.175333 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-01-28 00:54:12.175339 | orchestrator | Wednesday 28 January 2026 00:52:55 +0000 (0:00:00.412) 0:01:13.976 ***** 2026-01-28 00:54:12.175344 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175350 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175355 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175360 | orchestrator | 2026-01-28 00:54:12.175366 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-01-28 00:54:12.175371 | orchestrator | Wednesday 28 January 2026 00:52:56 +0000 (0:00:00.496) 0:01:14.473 ***** 
2026-01-28 00:54:12.175376 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175382 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175387 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175392 | orchestrator | 2026-01-28 00:54:12.175398 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-01-28 00:54:12.175403 | orchestrator | Wednesday 28 January 2026 00:52:56 +0000 (0:00:00.329) 0:01:14.802 ***** 2026-01-28 00:54:12.175408 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175414 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175419 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175424 | orchestrator | 2026-01-28 00:54:12.175430 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-01-28 00:54:12.175435 | orchestrator | Wednesday 28 January 2026 00:52:56 +0000 (0:00:00.304) 0:01:15.106 ***** 2026-01-28 00:54:12.175441 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175446 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175451 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175457 | orchestrator | 2026-01-28 00:54:12.175462 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-01-28 00:54:12.175467 | orchestrator | Wednesday 28 January 2026 00:52:57 +0000 (0:00:00.309) 0:01:15.415 ***** 2026-01-28 00:54:12.175473 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175478 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175483 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175489 | orchestrator | 2026-01-28 00:54:12.175494 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-01-28 00:54:12.175499 | orchestrator | Wednesday 28 January 2026 00:52:57 +0000 (0:00:00.450) 
0:01:15.866 ***** 2026-01-28 00:54:12.175505 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:54:12.175510 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:54:12.175515 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:54:12.175521 | orchestrator | 2026-01-28 00:54:12.175526 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-01-28 00:54:12.175531 | orchestrator | Wednesday 28 January 2026 00:52:57 +0000 (0:00:00.323) 0:01:16.190 ***** 2026-01-28 00:54:12.175538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.175552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.175564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:54:12.175579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name':
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.175588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.175594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.175599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 00:54:12.175605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175616 | orchestrator |
2026-01-28 00:54:12.175621 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-01-28 00:54:12.175630 | orchestrator | Wednesday 28 January 2026 00:52:59 +0000 (0:00:01.596) 0:01:17.786 *****
2026-01-28 00:54:12.175636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175697 | orchestrator |
2026-01-28 00:54:12.175703 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-01-28 00:54:12.175708 | orchestrator | Wednesday 28 January 2026 00:53:04 +0000 (0:00:04.627) 0:01:22.413 *****
2026-01-28 00:54:12.175714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.175774 | orchestrator |
2026-01-28 00:54:12.175779 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-28 00:54:12.175784 | orchestrator | Wednesday 28 January 2026 00:53:06 +0000 (0:00:02.640) 0:01:25.054 *****
2026-01-28 00:54:12.175790 | orchestrator |
2026-01-28 00:54:12.175795 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-28 00:54:12.175801 | orchestrator | Wednesday 28 January 2026 00:53:06 +0000 (0:00:00.062) 0:01:25.116 *****
2026-01-28 00:54:12.175806 | orchestrator |
2026-01-28 00:54:12.175811 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-28 00:54:12.175817 | orchestrator | Wednesday 28 January 2026 00:53:06 +0000 (0:00:00.061) 0:01:25.177 *****
2026-01-28 00:54:12.175822 | orchestrator |
2026-01-28 00:54:12.175827 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-28 00:54:12.175833 | orchestrator | Wednesday 28 January 2026 00:53:07 +0000 (0:00:00.058) 0:01:25.236 *****
2026-01-28 00:54:12.175838 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:54:12.175844 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:54:12.175849 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:54:12.175854 | orchestrator |
2026-01-28 00:54:12.175860 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-28 00:54:12.175865 | orchestrator | Wednesday 28 January 2026 00:53:13 +0000 (0:00:06.712) 0:01:31.948 *****
2026-01-28 00:54:12.175871 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:54:12.175876 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:54:12.175881 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:54:12.175887 | orchestrator |
2026-01-28 00:54:12.175892 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-28 00:54:12.175897 | orchestrator | Wednesday 28 January 2026 00:53:21 +0000 (0:00:07.793) 0:01:39.741 *****
2026-01-28 00:54:12.175903 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:54:12.175908 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:54:12.175913 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:54:12.175919 | orchestrator |
2026-01-28 00:54:12.175927 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-28 00:54:12.175932 | orchestrator | Wednesday 28 January 2026 00:53:28 +0000 (0:00:07.455) 0:01:47.197 *****
2026-01-28 00:54:12.175938 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:54:12.175943 | orchestrator |
2026-01-28 00:54:12.175948 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-28 00:54:12.175954 | orchestrator | Wednesday 28 January 2026 00:53:29 +0000 (0:00:00.329) 0:01:47.526 *****
2026-01-28 00:54:12.175959 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:54:12.175965 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:54:12.175973 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:54:12.175979 | orchestrator |
2026-01-28 00:54:12.175984 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-28 00:54:12.175989 | orchestrator | Wednesday 28 January 2026 00:53:30 +0000 (0:00:00.932) 0:01:48.458 *****
2026-01-28 00:54:12.176019 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:54:12.176025 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:54:12.176031 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:54:12.176036 | orchestrator |
2026-01-28 00:54:12.176042 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-28 00:54:12.176047 | orchestrator | Wednesday 28 January 2026 00:53:30 +0000 (0:00:00.676) 0:01:49.135 *****
2026-01-28 00:54:12.176052 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:54:12.176058 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:54:12.176063 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:54:12.176068 | orchestrator |
2026-01-28 00:54:12.176074 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-28 00:54:12.176079 | orchestrator | Wednesday 28 January 2026 00:53:31 +0000 (0:00:01.002) 0:01:50.138 *****
2026-01-28 00:54:12.176089 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:54:12.176094 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:54:12.176100 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:54:12.176105 | orchestrator |
2026-01-28 00:54:12.176110 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-28 00:54:12.176116 | orchestrator | Wednesday 28 January 2026 00:53:32 +0000 (0:00:00.695) 0:01:50.833 *****
2026-01-28 00:54:12.176121 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:54:12.176126 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:54:12.176132 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:54:12.176137 | orchestrator |
2026-01-28 00:54:12.176142 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-28 00:54:12.176148 | orchestrator | Wednesday 28 January 2026 00:53:34 +0000 (0:00:01.469) 0:01:52.303 *****
2026-01-28 00:54:12.176153 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:54:12.176159 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:54:12.176164 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:54:12.176169 | orchestrator |
2026-01-28 00:54:12.176175 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-01-28 00:54:12.176180 | orchestrator | Wednesday 28 January 2026 00:53:34 +0000 (0:00:00.866) 0:01:53.170 *****
2026-01-28 00:54:12.176185 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:54:12.176191 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:54:12.176196 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:54:12.176201 | orchestrator |
2026-01-28 00:54:12.176207 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-01-28 00:54:12.176212 | orchestrator | Wednesday 28 January 2026 00:53:35 +0000 (0:00:00.280) 0:01:53.450 *****
2026-01-28 00:54:12.176218 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176224 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176229 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176235 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176244 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176253 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176263 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176269 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176274 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176280 | orchestrator |
2026-01-28 00:54:12.176285 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-01-28 00:54:12.176291 | orchestrator | Wednesday 28 January 2026 00:53:36 +0000 (0:00:01.683) 0:01:55.134 *****
2026-01-28 00:54:12.176296 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176302 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176307 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176313 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176342 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176359 | orchestrator |
2026-01-28 00:54:12.176364 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-01-28 00:54:12.176370 | orchestrator | Wednesday 28 January 2026 00:53:41 +0000 (0:00:04.335) 0:01:59.469 *****
2026-01-28 00:54:12.176375 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176381 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176386 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176397 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176451 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 00:54:12.176457 | orchestrator |
2026-01-28 00:54:12.176462 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-28 00:54:12.176468 | orchestrator | Wednesday 28 January 2026 00:53:44 +0000 (0:00:03.056) 0:02:02.526 *****
2026-01-28 00:54:12.176473 | orchestrator |
2026-01-28 00:54:12.176479 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-28 00:54:12.176484 | orchestrator | Wednesday 28 January 2026 00:53:44 +0000 (0:00:00.079) 0:02:02.608 *****
2026-01-28 00:54:12.176490 | orchestrator |
2026-01-28 00:54:12.176495 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-01-28 00:54:12.176500 | orchestrator | Wednesday 28 January 2026 00:53:44 +0000 (0:00:00.186) 0:02:02.795 *****
2026-01-28 00:54:12.176506 | orchestrator |
2026-01-28 00:54:12.176511 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-01-28 00:54:12.176516 | orchestrator | Wednesday 28 January 2026 00:53:44 +0000 (0:00:00.119) 0:02:02.914 *****
2026-01-28 00:54:12.176522 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:54:12.176527 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:54:12.176533 | orchestrator |
2026-01-28 00:54:12.176538 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-01-28 00:54:12.176543 | orchestrator | Wednesday 28 January 2026 00:53:51 +0000 (0:00:06.427) 0:02:09.342 *****
2026-01-28 00:54:12.176549 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:54:12.176554 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:54:12.176560 | orchestrator |
2026-01-28 00:54:12.176565 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-01-28 00:54:12.176570 | orchestrator | Wednesday 28 January 2026 00:53:57 +0000 (0:00:06.186) 0:02:15.529 *****
2026-01-28 00:54:12.176576 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:54:12.176581 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:54:12.176587 | orchestrator |
2026-01-28 00:54:12.176592 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-01-28 00:54:12.176597 | orchestrator | Wednesday 28 January 2026 00:54:04 +0000 (0:00:06.825) 0:02:22.354 *****
2026-01-28 00:54:12.176603 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:54:12.176608 | orchestrator |
2026-01-28 00:54:12.176613 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-01-28 00:54:12.176623 | orchestrator | Wednesday 28 January 2026 00:54:04 +0000 (0:00:00.174) 0:02:22.529 *****
2026-01-28 00:54:12.176628 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:54:12.176633 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:54:12.176639 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:54:12.176644 | orchestrator |
2026-01-28 00:54:12.176650 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-01-28 00:54:12.176655 | orchestrator | Wednesday 28 January 2026 00:54:05 +0000 (0:00:00.910) 0:02:23.439 *****
2026-01-28 00:54:12.176660 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:54:12.176666 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:54:12.176671 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:54:12.176677 | orchestrator |
2026-01-28 00:54:12.176682 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-01-28 00:54:12.176687 | orchestrator | Wednesday 28 January 2026 00:54:06 +0000 (0:00:00.782) 0:02:24.221 *****
2026-01-28 00:54:12.176693 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:54:12.176698 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:54:12.176704 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:54:12.176709 | orchestrator |
2026-01-28 00:54:12.176714 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-01-28 00:54:12.176720 | orchestrator | Wednesday 28 January 2026 00:54:06 +0000 (0:00:00.797) 0:02:25.019 *****
2026-01-28 00:54:12.176725 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:54:12.176731 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:54:12.176736 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:54:12.176741 | orchestrator |
2026-01-28 00:54:12.176747 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-01-28 00:54:12.176752 | orchestrator | Wednesday 28 January 2026 00:54:07 +0000 (0:00:00.864) 0:02:25.884 *****
2026-01-28 00:54:12.176757 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:54:12.176763 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:54:12.176768 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:54:12.176773 | orchestrator |
2026-01-28 00:54:12.176779 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-01-28 00:54:12.176784 | orchestrator | Wednesday 28 January 2026 00:54:08 +0000 (0:00:00.759) 0:02:26.644 *****
2026-01-28 00:54:12.176792 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:54:12.176798 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:54:12.176803 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:54:12.176809 | orchestrator |
2026-01-28 00:54:12.176814 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:54:12.176820 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-01-28 00:54:12.176828 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-01-28 00:54:12.176834 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-01-28 00:54:12.176839 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:54:12.176845 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:54:12.176850 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 00:54:12.176856 | orchestrator | 2026-01-28 00:54:12.176861 | orchestrator | 2026-01-28 00:54:12.176867 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:54:12.176872 | orchestrator | Wednesday 28 January 2026 00:54:09 +0000 (0:00:00.999) 0:02:27.643 ***** 2026-01-28 00:54:12.176877 | orchestrator | =============================================================================== 2026-01-28 00:54:12.176889 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 25.67s 2026-01-28 00:54:12.176898 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.90s 2026-01-28 00:54:12.176907 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.28s 2026-01-28 00:54:12.176915 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.98s 2026-01-28 00:54:12.176924 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.14s 2026-01-28 00:54:12.176932 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.63s 2026-01-28 00:54:12.176941 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.34s 2026-01-28 00:54:12.176949 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.06s 2026-01-28 
00:54:12.176957 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.64s 2026-01-28 00:54:12.176966 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.58s 2026-01-28 00:54:12.176974 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.57s 2026-01-28 00:54:12.176983 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.03s 2026-01-28 00:54:12.176993 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.71s 2026-01-28 00:54:12.177015 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.68s 2026-01-28 00:54:12.177023 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.67s 2026-01-28 00:54:12.177031 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.60s 2026-01-28 00:54:12.177041 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.58s 2026-01-28 00:54:12.177047 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.47s 2026-01-28 00:54:12.177052 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.46s 2026-01-28 00:54:12.177057 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.29s 2026-01-28 00:54:15.218275 | orchestrator | 2026-01-28 00:54:15 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:54:15.219512 | orchestrator | 2026-01-28 00:54:15 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED 2026-01-28 00:54:15.219676 | orchestrator | 2026-01-28 00:54:15 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:54:18.250639 | orchestrator | 2026-01-28 00:54:18 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 
2026-01-28 00:54:18.252746 | orchestrator | 2026-01-28 00:54:18 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:18.252820 | orchestrator | 2026-01-28 00:54:18 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:21.299259 | orchestrator | 2026-01-28 00:54:21 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:21.302443 | orchestrator | 2026-01-28 00:54:21 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:21.302483 | orchestrator | 2026-01-28 00:54:21 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:24.342361 | orchestrator | 2026-01-28 00:54:24 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:24.345381 | orchestrator | 2026-01-28 00:54:24 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:24.346319 | orchestrator | 2026-01-28 00:54:24 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:27.387682 | orchestrator | 2026-01-28 00:54:27 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:27.389579 | orchestrator | 2026-01-28 00:54:27 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:27.389654 | orchestrator | 2026-01-28 00:54:27 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:30.440169 | orchestrator | 2026-01-28 00:54:30 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:30.440722 | orchestrator | 2026-01-28 00:54:30 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:30.440759 | orchestrator | 2026-01-28 00:54:30 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:33.483774 | orchestrator | 2026-01-28 00:54:33 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:33.485104 | orchestrator | 2026-01-28 00:54:33 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:33.485135 | orchestrator | 2026-01-28 00:54:33 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:36.533159 | orchestrator | 2026-01-28 00:54:36 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:36.536230 | orchestrator | 2026-01-28 00:54:36 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:36.536313 | orchestrator | 2026-01-28 00:54:36 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:39.579137 | orchestrator | 2026-01-28 00:54:39 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:39.580198 | orchestrator | 2026-01-28 00:54:39 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:39.580566 | orchestrator | 2026-01-28 00:54:39 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:42.619217 | orchestrator | 2026-01-28 00:54:42 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:42.619349 | orchestrator | 2026-01-28 00:54:42 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:42.619366 | orchestrator | 2026-01-28 00:54:42 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:45.656724 | orchestrator | 2026-01-28 00:54:45 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:45.658083 | orchestrator | 2026-01-28 00:54:45 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:45.658114 | orchestrator | 2026-01-28 00:54:45 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:48.704631 | orchestrator | 2026-01-28 00:54:48 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:48.706404 | orchestrator | 2026-01-28 00:54:48 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:48.706450 | orchestrator | 2026-01-28 00:54:48 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:51.746924 | orchestrator | 2026-01-28 00:54:51 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:51.747706 | orchestrator | 2026-01-28 00:54:51 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:51.750257 | orchestrator | 2026-01-28 00:54:51 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:54.787515 | orchestrator | 2026-01-28 00:54:54 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:54.787649 | orchestrator | 2026-01-28 00:54:54 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:54.787678 | orchestrator | 2026-01-28 00:54:54 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:54:57.838438 | orchestrator | 2026-01-28 00:54:57 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:54:57.840609 | orchestrator | 2026-01-28 00:54:57 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:54:57.840686 | orchestrator | 2026-01-28 00:54:57 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:00.897171 | orchestrator | 2026-01-28 00:55:00 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:00.897299 | orchestrator | 2026-01-28 00:55:00 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:00.897316 | orchestrator | 2026-01-28 00:55:00 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:03.934414 | orchestrator | 2026-01-28 00:55:03 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:03.936892 | orchestrator | 2026-01-28 00:55:03 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:03.937046 | orchestrator | 2026-01-28 00:55:03 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:06.982563 | orchestrator | 2026-01-28 00:55:06 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:06.982648 | orchestrator | 2026-01-28 00:55:06 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:06.982657 | orchestrator | 2026-01-28 00:55:06 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:10.028840 | orchestrator | 2026-01-28 00:55:10 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:10.029325 | orchestrator | 2026-01-28 00:55:10 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:10.029357 | orchestrator | 2026-01-28 00:55:10 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:13.064800 | orchestrator | 2026-01-28 00:55:13 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:13.064897 | orchestrator | 2026-01-28 00:55:13 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:13.064933 | orchestrator | 2026-01-28 00:55:13 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:16.098257 | orchestrator | 2026-01-28 00:55:16 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:16.099583 | orchestrator | 2026-01-28 00:55:16 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:16.099673 | orchestrator | 2026-01-28 00:55:16 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:19.134273 | orchestrator | 2026-01-28 00:55:19 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:19.135825 | orchestrator | 2026-01-28 00:55:19 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:19.135854 | orchestrator | 2026-01-28 00:55:19 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:22.205459 | orchestrator | 2026-01-28 00:55:22 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:22.207985 | orchestrator | 2026-01-28 00:55:22 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:22.208303 | orchestrator | 2026-01-28 00:55:22 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:25.252179 | orchestrator | 2026-01-28 00:55:25 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:25.253523 | orchestrator | 2026-01-28 00:55:25 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:25.253782 | orchestrator | 2026-01-28 00:55:25 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:28.301138 | orchestrator | 2026-01-28 00:55:28 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:28.302269 | orchestrator | 2026-01-28 00:55:28 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:28.302315 | orchestrator | 2026-01-28 00:55:28 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:31.355245 | orchestrator | 2026-01-28 00:55:31 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:31.355912 | orchestrator | 2026-01-28 00:55:31 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:31.355947 | orchestrator | 2026-01-28 00:55:31 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:34.387702 | orchestrator | 2026-01-28 00:55:34 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:34.387796 | orchestrator | 2026-01-28 00:55:34 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:34.387810 | orchestrator | 2026-01-28 00:55:34 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:37.424130 | orchestrator | 2026-01-28 00:55:37 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:37.424775 | orchestrator | 2026-01-28 00:55:37 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:37.424928 | orchestrator | 2026-01-28 00:55:37 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:40.469496 | orchestrator | 2026-01-28 00:55:40 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:40.469896 | orchestrator | 2026-01-28 00:55:40 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:40.470196 | orchestrator | 2026-01-28 00:55:40 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:43.515320 | orchestrator | 2026-01-28 00:55:43 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:43.517098 | orchestrator | 2026-01-28 00:55:43 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:43.517132 | orchestrator | 2026-01-28 00:55:43 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:46.561356 | orchestrator | 2026-01-28 00:55:46 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:46.567854 | orchestrator | 2026-01-28 00:55:46 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:46.567930 | orchestrator | 2026-01-28 00:55:46 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:49.607698 | orchestrator | 2026-01-28 00:55:49 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:49.609196 | orchestrator | 2026-01-28 00:55:49 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:49.609692 | orchestrator | 2026-01-28 00:55:49 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:52.661468 | orchestrator | 2026-01-28 00:55:52 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:52.664540 | orchestrator | 2026-01-28 00:55:52 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:52.666167 | orchestrator | 2026-01-28 00:55:52 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:55.712686 | orchestrator | 2026-01-28 00:55:55 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:55.713761 | orchestrator | 2026-01-28 00:55:55 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:55.714106 | orchestrator | 2026-01-28 00:55:55 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:55:58.758237 | orchestrator | 2026-01-28 00:55:58 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:55:58.759593 | orchestrator | 2026-01-28 00:55:58 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:55:58.759805 | orchestrator | 2026-01-28 00:55:58 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:01.823572 | orchestrator | 2026-01-28 00:56:01 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:01.826211 | orchestrator | 2026-01-28 00:56:01 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:01.826258 | orchestrator | 2026-01-28 00:56:01 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:04.875422 | orchestrator | 2026-01-28 00:56:04 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:04.877070 | orchestrator | 2026-01-28 00:56:04 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:04.877104 | orchestrator | 2026-01-28 00:56:04 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:07.926458 | orchestrator | 2026-01-28 00:56:07 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:07.926564 | orchestrator | 2026-01-28 00:56:07 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:07.926582 | orchestrator | 2026-01-28 00:56:07 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:10.977654 | orchestrator | 2026-01-28 00:56:10 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:10.977763 | orchestrator | 2026-01-28 00:56:10 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:10.977779 | orchestrator | 2026-01-28 00:56:10 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:14.025271 | orchestrator | 2026-01-28 00:56:14 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:14.025372 | orchestrator | 2026-01-28 00:56:14 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:14.025385 | orchestrator | 2026-01-28 00:56:14 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:17.083320 | orchestrator | 2026-01-28 00:56:17 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:17.083876 | orchestrator | 2026-01-28 00:56:17 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:17.084117 | orchestrator | 2026-01-28 00:56:17 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:20.129556 | orchestrator | 2026-01-28 00:56:20 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:20.131699 | orchestrator | 2026-01-28 00:56:20 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:20.131777 | orchestrator | 2026-01-28 00:56:20 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:23.181457 | orchestrator | 2026-01-28 00:56:23 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:23.182207 | orchestrator | 2026-01-28 00:56:23 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:23.182555 | orchestrator | 2026-01-28 00:56:23 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:26.237391 | orchestrator | 2026-01-28 00:56:26 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:26.238863 | orchestrator | 2026-01-28 00:56:26 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:26.238911 | orchestrator | 2026-01-28 00:56:26 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:29.282147 | orchestrator | 2026-01-28 00:56:29 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:29.284072 | orchestrator | 2026-01-28 00:56:29 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:29.284337 | orchestrator | 2026-01-28 00:56:29 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:32.328841 | orchestrator | 2026-01-28 00:56:32 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:32.330389 | orchestrator | 2026-01-28 00:56:32 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:32.330818 | orchestrator | 2026-01-28 00:56:32 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:35.381677 | orchestrator | 2026-01-28 00:56:35 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:35.386592 | orchestrator | 2026-01-28 00:56:35 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:35.386655 | orchestrator | 2026-01-28 00:56:35 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:38.429826 | orchestrator | 2026-01-28 00:56:38 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:38.431056 | orchestrator | 2026-01-28 00:56:38 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:38.431261 | orchestrator | 2026-01-28 00:56:38 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:41.477349 | orchestrator | 2026-01-28 00:56:41 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:41.478894 | orchestrator | 2026-01-28 00:56:41 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:41.479000 | orchestrator | 2026-01-28 00:56:41 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:44.524372 | orchestrator | 2026-01-28 00:56:44 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:44.524449 | orchestrator | 2026-01-28 00:56:44 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:44.524459 | orchestrator | 2026-01-28 00:56:44 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:47.569545 | orchestrator | 2026-01-28 00:56:47 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:47.570459 | orchestrator | 2026-01-28 00:56:47 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:47.570496 | orchestrator | 2026-01-28 00:56:47 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:50.612555 | orchestrator | 2026-01-28 00:56:50 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:50.613853 | orchestrator | 2026-01-28 00:56:50 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:50.614209 | orchestrator | 2026-01-28 00:56:50 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:53.661615 | orchestrator | 2026-01-28 00:56:53 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:53.664119 | orchestrator | 2026-01-28 00:56:53 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:53.664170 | orchestrator | 2026-01-28 00:56:53 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:56.713309 | orchestrator | 2026-01-28 00:56:56 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:56.716040 | orchestrator | 2026-01-28 00:56:56 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:56.716099 | orchestrator | 2026-01-28 00:56:56 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:56:59.767521 | orchestrator | 2026-01-28 00:56:59 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:56:59.768818 | orchestrator | 2026-01-28 00:56:59 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state STARTED
2026-01-28 00:56:59.768869 | orchestrator | 2026-01-28 00:56:59 | INFO  | Wait 1 second(s) until the next check
2026-01-28 00:57:02.806736 | orchestrator | 2026-01-28 00:57:02 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED
2026-01-28 00:57:02.815873 | orchestrator | 2026-01-28 00:57:02 | INFO  | Task b90fc0f8-92cb-41ec-8105-64e6204ffa9f is in state SUCCESS
2026-01-28 00:57:02.821010 | orchestrator |
2026-01-28 00:57:02.821109 | orchestrator |
2026-01-28 00:57:02.821135 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 00:57:02.821156 | orchestrator |
2026-01-28 00:57:02.821178 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 00:57:02.821201 | orchestrator | Wednesday 28 January 2026 00:50:33 +0000 (0:00:00.434) 0:00:00.434 *****
2026-01-28 00:57:02.821223 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:57:02.821246 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:57:02.821399 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:57:02.821420 | orchestrator |
2026-01-28 00:57:02.821440 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 00:57:02.821457 | orchestrator | Wednesday 28 January 2026 00:50:33 +0000 (0:00:00.265) 0:00:00.700 *****
2026-01-28 00:57:02.821521 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-01-28 00:57:02.821542 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-01-28 00:57:02.821559 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-01-28 00:57:02.821576 | orchestrator |
2026-01-28 00:57:02.821594 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-01-28 00:57:02.821611 | orchestrator |
2026-01-28 00:57:02.821629 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-01-28 00:57:02.821648 | orchestrator | Wednesday 28 January 2026 00:50:34 +0000 (0:00:00.553) 0:00:01.253 *****
2026-01-28 00:57:02.821668 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:57:02.821814 | orchestrator |
2026-01-28 00:57:02.821833 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-01-28 00:57:02.821848 | orchestrator | Wednesday 28 January 2026 00:50:35 +0000 (0:00:00.729) 0:00:01.983 *****
2026-01-28 00:57:02.821864 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:57:02.821880 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:57:02.821895 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:57:02.821910 | orchestrator |
2026-01-28 00:57:02.822223 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-01-28 00:57:02.822255 | orchestrator | Wednesday 28 January 2026 00:50:35 +0000 (0:00:00.878) 0:00:02.861 *****
2026-01-28 00:57:02.822304 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:57:02.822320 | orchestrator |
2026-01-28 00:57:02.822486 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-01-28 00:57:02.822498 | orchestrator | Wednesday 28 January 2026 00:50:37 +0000 (0:00:01.191) 0:00:04.052 *****
2026-01-28 00:57:02.822507 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:57:02.822518 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:57:02.822527 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:57:02.822559 | orchestrator |
2026-01-28 00:57:02.822570 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-01-28 00:57:02.822587 | orchestrator | Wednesday 28 January 2026 00:50:37 +0000 (0:00:00.551) 0:00:04.603 *****
2026-01-28 00:57:02.822604 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-28 00:57:02.822620 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-28 00:57:02.822636 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-28 00:57:02.822652 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-01-28 00:57:02.822667 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-28 00:57:02.822682 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-28 00:57:02.822700 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-28 00:57:02.822754 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-28 00:57:02.822773 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-28 00:57:02.822857 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-01-28 00:57:02.822868 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-01-28 00:57:02.822878 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-01-28 00:57:02.822887 | orchestrator |
2026-01-28 00:57:02.822897 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-28 00:57:02.822906 | orchestrator | Wednesday 28 January 2026 00:50:41 +0000 (0:00:04.065) 0:00:08.669 *****
2026-01-28 00:57:02.822946 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-28 00:57:02.822958 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-28 00:57:02.822968 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-28 00:57:02.822977 | orchestrator |
2026-01-28 00:57:02.822988 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-28 00:57:02.822997 | orchestrator | Wednesday 28 January 2026 00:50:42 +0000 (0:00:00.691) 0:00:09.361 *****
2026-01-28 00:57:02.823060 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-01-28 00:57:02.823072 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-01-28 00:57:02.823082 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-01-28 00:57:02.823091 | orchestrator |
2026-01-28 00:57:02.823148 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-28 00:57:02.823159 | orchestrator | Wednesday 28 January 2026 00:50:44 +0000 (0:00:02.090) 0:00:11.451 *****
2026-01-28 00:57:02.823169 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-01-28 00:57:02.823179 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.823205 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-01-28 00:57:02.823215 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.823249 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-01-28 00:57:02.823259 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.823268 | orchestrator |
2026-01-28 00:57:02.823300 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-01-28 00:57:02.823310 | orchestrator | Wednesday 28 January 2026 00:50:45 +0000 (0:00:00.845) 0:00:12.297 *****
2026-01-28 00:57:02.823322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-01-28 00:57:02.823350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-28 00:57:02.823397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-01-28 00:57:02.823409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-28 00:57:02.823427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-28 00:57:02.823485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-28 00:57:02.823497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-28 00:57:02.823515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-28 00:57:02.823525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-01-28 00:57:02.823535 | orchestrator |
2026-01-28 00:57:02.823545 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-01-28 00:57:02.823646 | orchestrator | Wednesday 28 January 2026 00:50:48 +0000 (0:00:03.149) 0:00:15.447 *****
2026-01-28 00:57:02.823658 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.823668 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.823677 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.823687 | orchestrator |
2026-01-28 00:57:02.823697 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-01-28 00:57:02.823728 | orchestrator | Wednesday 28 January 2026 00:50:49 +0000 (0:00:01.110) 0:00:16.557 *****
2026-01-28 00:57:02.823739 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-01-28 00:57:02.823748 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-01-28 00:57:02.823758 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-01-28 00:57:02.823820 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-01-28 00:57:02.823830 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-01-28 00:57:02.823839 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-01-28 00:57:02.823849 | orchestrator |
2026-01-28 00:57:02.823858 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-01-28 00:57:02.823868 | orchestrator | Wednesday 28 January 2026 00:50:51 +0000 (0:00:02.555) 0:00:18.760 *****
2026-01-28 00:57:02.823877 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.823984 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.824023 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.824033 | orchestrator |
2026-01-28 00:57:02.824047 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-01-28 00:57:02.824064 | orchestrator | Wednesday 28 January 2026 00:50:54 +0000 (0:00:02.555) 0:00:21.315 *****
2026-01-28 00:57:02.824112 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:57:02.824134 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:57:02.824145 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:57:02.824154 | orchestrator |
2026-01-28 00:57:02.824167 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-01-28 00:57:02.824181 | orchestrator | Wednesday 28 January 2026 00:50:56 +0000 (0:00:01.733) 0:00:23.049 *****
2026-01-28 00:57:02.824192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-01-28 00:57:02.824220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-01-28 00:57:02.824231 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.824242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-28 00:57:02.824252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.824262 | orchestrator | skipping: [testbed-node-0] 2026-01-28 
00:57:02.824272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.824286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.824421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-28 00:57:02.824435 
| orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.824446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.824456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.824466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.824477 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-28 00:57:02.824487 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.824496 | orchestrator | 2026-01-28 00:57:02.824524 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-01-28 00:57:02.824532 | orchestrator | Wednesday 28 January 2026 00:50:56 +0000 (0:00:00.601) 0:00:23.651 ***** 2026-01-28 00:57:02.824545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.824598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-28 00:57:02.824637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.824698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-28 00:57:02.824708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.824725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8', '__omit_place_holder__e1a65f319f045b20d291aacb262586a084aeb0a8'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-01-28 00:57:02.824733 | orchestrator | 2026-01-28 00:57:02.824741 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-01-28 00:57:02.824749 | orchestrator | Wednesday 28 January 2026 00:51:00 +0000 (0:00:03.443) 0:00:27.094 ***** 2026-01-28 00:57:02.824761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.824827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-28 00:57:02.824844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-28 00:57:02.824853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-28 00:57:02.824861 | orchestrator | 2026-01-28 00:57:02.824869 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-01-28 00:57:02.824877 | orchestrator | Wednesday 28 January 2026 00:51:04 +0000 (0:00:04.424) 0:00:31.519 ***** 2026-01-28 00:57:02.824885 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-28 00:57:02.824898 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-28 00:57:02.824906 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-01-28 00:57:02.824932 | orchestrator | 2026-01-28 00:57:02.824942 | orchestrator | TASK 
[loadbalancer : Copying over proxysql config] ***************************** 2026-01-28 00:57:02.824950 | orchestrator | Wednesday 28 January 2026 00:51:07 +0000 (0:00:02.725) 0:00:34.245 ***** 2026-01-28 00:57:02.824958 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-28 00:57:02.824966 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-28 00:57:02.824974 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-01-28 00:57:02.824982 | orchestrator | 2026-01-28 00:57:02.824989 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-01-28 00:57:02.824997 | orchestrator | Wednesday 28 January 2026 00:51:11 +0000 (0:00:03.962) 0:00:38.207 ***** 2026-01-28 00:57:02.825005 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.825013 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.825021 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.825029 | orchestrator | 2026-01-28 00:57:02.825036 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-01-28 00:57:02.825044 | orchestrator | Wednesday 28 January 2026 00:51:11 +0000 (0:00:00.580) 0:00:38.788 ***** 2026-01-28 00:57:02.825052 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-28 00:57:02.825062 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-28 00:57:02.825070 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-01-28 00:57:02.825102 | orchestrator | 2026-01-28 00:57:02.825111 | orchestrator | TASK 
[loadbalancer : Copying over keepalived.conf] ***************************** 2026-01-28 00:57:02.825119 | orchestrator | Wednesday 28 January 2026 00:51:15 +0000 (0:00:03.841) 0:00:42.630 ***** 2026-01-28 00:57:02.825134 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-28 00:57:02.825142 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-28 00:57:02.825150 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-01-28 00:57:02.825158 | orchestrator | 2026-01-28 00:57:02.825166 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-01-28 00:57:02.825173 | orchestrator | Wednesday 28 January 2026 00:51:18 +0000 (0:00:02.868) 0:00:45.498 ***** 2026-01-28 00:57:02.825181 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-01-28 00:57:02.825189 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-01-28 00:57:02.825250 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-01-28 00:57:02.825258 | orchestrator | 2026-01-28 00:57:02.825266 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-01-28 00:57:02.825274 | orchestrator | Wednesday 28 January 2026 00:51:20 +0000 (0:00:02.057) 0:00:47.556 ***** 2026-01-28 00:57:02.825282 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-01-28 00:57:02.825290 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-01-28 00:57:02.825297 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-01-28 00:57:02.825305 | orchestrator | 2026-01-28 00:57:02.825313 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-01-28 00:57:02.825321 | orchestrator 
| Wednesday 28 January 2026 00:51:22 +0000 (0:00:01.740) 0:00:49.297 ***** 2026-01-28 00:57:02.825333 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.825341 | orchestrator | 2026-01-28 00:57:02.825349 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-01-28 00:57:02.825356 | orchestrator | Wednesday 28 January 2026 00:51:23 +0000 (0:00:00.824) 0:00:50.121 ***** 2026-01-28 00:57:02.825384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.825402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.825411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.825426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.825435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-28 00:57:02.825443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.825493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.825502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-28 00:57:02.825516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-28 00:57:02.825524 | orchestrator | 2026-01-28 00:57:02.825532 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-01-28 00:57:02.825540 | orchestrator | Wednesday 28 January 2026 00:51:27 +0000 (0:00:04.009) 0:00:54.130 ***** 2026-01-28 00:57:02.825549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.825562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.825570 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.825578 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.825587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.825616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.825630 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.825639 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.825647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.825660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.825668 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.825676 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.825684 | orchestrator | 2026-01-28 00:57:02.825692 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-01-28 00:57:02.825700 | orchestrator | Wednesday 28 January 2026 00:51:29 +0000 (0:00:02.452) 0:00:56.582 ***** 2026-01-28 00:57:02.825712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.825720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.825734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.825742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.825755 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.825764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.825772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.825780 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.825794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.825812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.825826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.825877 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.825893 | orchestrator | 2026-01-28 00:57:02.825952 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-01-28 00:57:02.826004 | orchestrator | Wednesday 28 January 2026 00:51:31 +0000 (0:00:01.636) 0:00:58.219 ***** 2026-01-28 00:57:02.826066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826096 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.826104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826134 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.826147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826181 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.826189 | orchestrator | 2026-01-28 00:57:02.826197 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-01-28 00:57:02.826205 | orchestrator | Wednesday 28 January 2026 00:51:32 +0000 (0:00:01.271) 0:00:59.490 ***** 2026-01-28 00:57:02.826213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826260 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.826274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826305 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.826313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826342 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.826350 | orchestrator | 2026-01-28 00:57:02.826357 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-01-28 00:57:02.826370 | orchestrator | Wednesday 28 January 2026 00:51:33 +0000 (0:00:00.576) 0:01:00.066 ***** 2026-01-28 00:57:02.826379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826409 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.826417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826442 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.826458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826487 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.826495 | orchestrator | 2026-01-28 00:57:02.826503 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] 
******* 2026-01-28 00:57:02.826511 | orchestrator | Wednesday 28 January 2026 00:51:33 +0000 (0:00:00.761) 0:01:00.828 ***** 2026-01-28 00:57:02.826519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826548 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.826560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826590 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.826599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826623 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.826636 | orchestrator | 2026-01-28 00:57:02.826644 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-01-28 00:57:02.826652 | orchestrator | Wednesday 28 January 2026 00:51:34 +0000 (0:00:00.977) 0:01:01.805 ***** 2026-01-28 00:57:02.826664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826694 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.826702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826732 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.826740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826768 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.826776 | orchestrator | 2026-01-28 00:57:02.826784 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-01-28 00:57:02.826796 | orchestrator | Wednesday 28 January 2026 00:51:35 +0000 (0:00:00.512) 0:01:02.318 ***** 2026-01-28 00:57:02.826804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826834 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.826842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.826871 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.826884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-01-28 00:57:02.826893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-01-28 00:57:02.826901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-01-28 00:57:02.827039 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.827064 | orchestrator | 2026-01-28 00:57:02.827072 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-01-28 00:57:02.827081 | orchestrator | Wednesday 28 January 2026 00:51:36 +0000 (0:00:00.755) 0:01:03.073 ***** 2026-01-28 00:57:02.827089 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-28 00:57:02.827097 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-28 00:57:02.827105 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-01-28 00:57:02.827113 | orchestrator | 2026-01-28 00:57:02.827121 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-01-28 00:57:02.827129 | orchestrator | Wednesday 28 January 2026 00:51:38 +0000 (0:00:01.928) 0:01:05.001 ***** 2026-01-28 00:57:02.827137 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-28 00:57:02.827145 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-28 00:57:02.827153 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-01-28 00:57:02.827161 | orchestrator | 2026-01-28 00:57:02.827169 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-01-28 00:57:02.827177 | orchestrator | Wednesday 28 January 2026 00:51:39 +0000 (0:00:01.567) 0:01:06.569 ***** 2026-01-28 00:57:02.827185 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-28 00:57:02.827192 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-28 00:57:02.827206 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-01-28 00:57:02.827215 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-28 00:57:02.827223 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.827230 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-28 00:57:02.827238 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.827246 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-28 00:57:02.827254 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.827262 | orchestrator | 2026-01-28 00:57:02.827269 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-01-28 00:57:02.827277 | orchestrator | Wednesday 28 January 2026 00:51:40 +0000 (0:00:00.997) 0:01:07.566 ***** 2026-01-28 00:57:02.827296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.827305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.827321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-01-28 00:57:02.827330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.827339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.827351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-01-28 00:57:02.827358 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-28 00:57:02.827370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-28 00:57:02.827382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-01-28 00:57:02.827388 | orchestrator | 2026-01-28 00:57:02.827395 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-01-28 00:57:02.827402 | orchestrator | Wednesday 28 January 2026 00:51:43 +0000 (0:00:02.902) 0:01:10.469 ***** 2026-01-28 00:57:02.827409 | orchestrator | included: aodh for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-28 00:57:02.827416 | orchestrator | 2026-01-28 00:57:02.827422 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-01-28 00:57:02.827429 | orchestrator | Wednesday 28 January 2026 00:51:44 +0000 (0:00:00.800) 0:01:11.269 ***** 2026-01-28 00:57:02.827437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-01-28 00:57:02.827444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.827455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 
'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.827462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.827474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 
2026-01-28 00:57:02.827485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-28 00:57:02.827493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-28 00:57:02.827500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.827510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-28 00:57:02.827517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.827532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.827540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.827547 | orchestrator |
2026-01-28 00:57:02.827554 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-01-28 00:57:02.827560 | orchestrator | Wednesday 28 January 2026 00:51:49 +0000 (0:00:05.036) 0:01:16.305 *****
2026-01-28 00:57:02.827568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-28 00:57:02.827575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-28 00:57:02.827585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.827593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.827604 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.827615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-28 00:57:02.827622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-28 00:57:02.827629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830147 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.830164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-01-28 00:57:02.830173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-01-28 00:57:02.830189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830203 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.830211 | orchestrator |
2026-01-28 00:57:02.830218 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-01-28 00:57:02.830225 | orchestrator | Wednesday 28 January 2026 00:51:50 +0000 (0:00:01.027) 0:01:17.333 *****
2026-01-28 00:57:02.830233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-28 00:57:02.830241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-28 00:57:02.830249 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.830256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-28 00:57:02.830263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-28 00:57:02.830269 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.830276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-01-28 00:57:02.830297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-01-28 00:57:02.830304 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.830311 | orchestrator |
2026-01-28 00:57:02.830317 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-01-28 00:57:02.830324 | orchestrator | Wednesday 28 January 2026 00:51:51 +0000 (0:00:00.977) 0:01:18.311 *****
2026-01-28 00:57:02.830331 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.830338 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.830345 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.830352 | orchestrator |
2026-01-28 00:57:02.830363 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-01-28 00:57:02.830373 | orchestrator | Wednesday 28 January 2026 00:51:52 +0000 (0:00:01.247) 0:01:19.558 *****
2026-01-28 00:57:02.830380 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.830387 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.830394 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.830401 | orchestrator |
2026-01-28 00:57:02.830408 | orchestrator | TASK [include_role : barbican] *************************************************
2026-01-28 00:57:02.830414 | orchestrator | Wednesday 28 January 2026 00:51:54 +0000 (0:00:01.771) 0:01:21.330 *****
2026-01-28 00:57:02.830421 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:57:02.830428 | orchestrator |
2026-01-28 00:57:02.830435 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-01-28 00:57:02.830441 | orchestrator | Wednesday 28 January 2026 00:51:55 +0000 (0:00:00.814) 0:01:22.145 *****
2026-01-28 00:57:02.830449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.830458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.830497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.830519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830534 | orchestrator |
2026-01-28 00:57:02.830540 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-01-28 00:57:02.830548 | orchestrator | Wednesday 28 January 2026 00:52:00 +0000 (0:00:04.863) 0:01:27.008 *****
2026-01-28 00:57:02.830564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.830572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830601 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.830609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.830616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830638 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.830648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.830656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.830670 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.830677 | orchestrator |
2026-01-28 00:57:02.830684 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2026-01-28 00:57:02.830691 | orchestrator | Wednesday 28 January 2026 00:52:00 +0000 (0:00:00.585) 0:01:27.594 *****
2026-01-28 00:57:02.830698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-28 00:57:02.830706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-28 00:57:02.830713 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.830720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-28 00:57:02.830727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-28 00:57:02.830738 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.830772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-28 00:57:02.830779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2026-01-28 00:57:02.830787 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.830793 | orchestrator |
2026-01-28 00:57:02.830804 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-01-28 00:57:02.830811 | orchestrator | Wednesday 28 January 2026 00:52:01 +0000 (0:00:00.866) 0:01:28.460 *****
2026-01-28 00:57:02.830818 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.830825 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.830832 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.830839 | orchestrator |
2026-01-28 00:57:02.830845 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-01-28 00:57:02.830852 | orchestrator | Wednesday 28 January 2026 00:52:02 +0000 (0:00:01.264) 0:01:29.725 *****
2026-01-28 00:57:02.830859 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.830866 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.830872 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.830879 | orchestrator |
2026-01-28 00:57:02.830889 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-01-28 00:57:02.830896 | orchestrator | Wednesday 28 January 2026 00:52:04 +0000 (0:00:02.012) 0:01:31.738 *****
2026-01-28 00:57:02.830903 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.830910 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.830942 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.830950 | orchestrator |
2026-01-28 00:57:02.830957 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-01-28 00:57:02.830963 | orchestrator | Wednesday 28 January 2026 00:52:05 +0000 (0:00:00.267) 0:01:32.005 *****
2026-01-28 00:57:02.830970 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:57:02.830977 | orchestrator |
2026-01-28 00:57:02.830983 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-01-28 00:57:02.830990 | orchestrator | Wednesday 28 January 2026 00:52:05 +0000 (0:00:00.715) 0:01:32.721 *****
2026-01-28 00:57:02.830998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-28 00:57:02.831006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-28 00:57:02.831018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-28 00:57:02.831025 | orchestrator |
2026-01-28 00:57:02.831031 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2026-01-28 00:57:02.831038 | orchestrator | Wednesday 28 January 2026 00:52:08 +0000 (0:00:02.622) 0:01:35.343 *****
2026-01-28 00:57:02.831053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2026-01-28 00:57:02.831060 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.831067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-28 00:57:02.831074 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.831082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-01-28 00:57:02.831093 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.831100 | orchestrator | 2026-01-28 00:57:02.831106 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-01-28 00:57:02.831113 | orchestrator | Wednesday 28 January 2026 00:52:09 +0000 (0:00:01.567) 0:01:36.910 ***** 2026-01-28 00:57:02.831122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-28 00:57:02.831130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-28 00:57:02.831138 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.831145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-28 00:57:02.831158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-28 00:57:02.831165 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.831175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-28 00:57:02.831182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-01-28 00:57:02.831189 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.831196 | orchestrator | 2026-01-28 00:57:02.831203 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-01-28 00:57:02.831210 | orchestrator | Wednesday 28 January 2026 00:52:12 +0000 (0:00:02.181) 0:01:39.091 ***** 2026-01-28 00:57:02.831216 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.831223 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.831230 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.831237 | orchestrator | 2026-01-28 00:57:02.831243 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-01-28 00:57:02.831250 | orchestrator | Wednesday 28 January 2026 00:52:12 +0000 (0:00:00.771) 0:01:39.863 ***** 2026-01-28 00:57:02.831261 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.831268 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.831274 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.831281 | orchestrator | 2026-01-28 00:57:02.831288 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-01-28 
00:57:02.831295 | orchestrator | Wednesday 28 January 2026 00:52:14 +0000 (0:00:01.548) 0:01:41.411 ***** 2026-01-28 00:57:02.831301 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.831308 | orchestrator | 2026-01-28 00:57:02.831315 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-01-28 00:57:02.831322 | orchestrator | Wednesday 28 January 2026 00:52:15 +0000 (0:00:00.752) 0:01:42.164 ***** 2026-01-28 00:57:02.831329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.831336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.831369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831391 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.831415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831441 | orchestrator | 2026-01-28 00:57:02.831448 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-01-28 
00:57:02.831455 | orchestrator | Wednesday 28 January 2026 00:52:20 +0000 (0:00:04.960) 0:01:47.124 ***** 2026-01-28 00:57:02.831462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.831474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831502 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.831510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.831517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831549 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.831556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.831563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.831584 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.831591 | orchestrator | 2026-01-28 00:57:02.831598 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-01-28 00:57:02.831605 | orchestrator | Wednesday 28 January 2026 00:52:21 +0000 (0:00:01.514) 0:01:48.639 ***** 2026-01-28 00:57:02.831612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-01-28 00:57:02.831622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-01-28 00:57:02.831634 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.831641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-01-28 00:57:02.831651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-01-28 00:57:02.831658 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.831665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-01-28 00:57:02.831672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-01-28 00:57:02.831678 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.831685 | orchestrator |
2026-01-28 00:57:02.831692 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-01-28 00:57:02.831699 | orchestrator | Wednesday 28 January 2026 00:52:23 +0000 (0:00:01.443) 0:01:50.083 *****
2026-01-28 00:57:02.831705 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.831712 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.831719 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.831725 | orchestrator |
2026-01-28 00:57:02.831732 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-01-28 00:57:02.831739 | orchestrator | Wednesday 28 January 2026 00:52:24 +0000 (0:00:01.591) 0:01:51.675 *****
2026-01-28 00:57:02.831746 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.831752 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.831759 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.831766 | orchestrator |
2026-01-28 00:57:02.831773 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-01-28 00:57:02.831779 | orchestrator | Wednesday 28 January 2026 00:52:27 +0000 (0:00:02.333) 0:01:54.008 *****
2026-01-28 00:57:02.831786 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.831793 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.831800 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.831807 | orchestrator |
2026-01-28 00:57:02.831813 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-01-28 00:57:02.831820 | orchestrator | Wednesday 28 January 2026 00:52:27 +0000 (0:00:00.572) 0:01:54.581 *****
2026-01-28 00:57:02.831827 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.831833 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.831840 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.831847 | orchestrator |
2026-01-28 00:57:02.831854 | orchestrator | TASK [include_role : designate] ************************************************
2026-01-28 00:57:02.831861 | orchestrator | Wednesday 28 January 2026 00:52:27 +0000 (0:00:00.321) 0:01:54.902 *****
2026-01-28 00:57:02.831867 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:57:02.831874 | orchestrator |
2026-01-28 00:57:02.831881 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-01-28 00:57:02.831888 | orchestrator | Wednesday 28 January 2026 00:52:28 +0000 (0:00:00.771) 0:01:55.673 *****
2026-01-28 00:57:02.831895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-28 00:57:02.831910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-28 00:57:02.831969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.831983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.831993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-28 00:57:02.832035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-28 00:57:02.832043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-28 00:57:02.832099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-28 00:57:02.832106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832149 | orchestrator |
2026-01-28 00:57:02.832155 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-01-28 00:57:02.832162 | orchestrator | Wednesday 28 January 2026 00:52:34 +0000 (0:00:05.866) 0:02:01.540 *****
2026-01-28 00:57:02.832175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-28 00:57:02.832182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-28 00:57:02.832189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832364 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.832372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-28 00:57:02.832379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-28 00:57:02.832394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes':
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-28 00:57:02.832428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-01-28 00:57:02.832445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832499 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.832508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.832533 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.832541 | orchestrator |
2026-01-28 00:57:02.832549 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-01-28 00:57:02.832557 | orchestrator | Wednesday 28 January 2026 00:52:35 +0000 (0:00:00.874) 0:02:02.414 *****
2026-01-28 00:57:02.832566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-28 00:57:02.832575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-28 00:57:02.832583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-28 00:57:02.832591 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.832599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-28 00:57:02.832607 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.832615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-01-28 00:57:02.832623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-01-28 00:57:02.832631 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.832639 | orchestrator |
2026-01-28 00:57:02.832647 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-01-28 00:57:02.832655 | orchestrator | Wednesday 28 January 2026 00:52:36 +0000 (0:00:01.072) 0:02:03.487 *****
2026-01-28 00:57:02.832663 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.832671 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.832679 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.832687 | orchestrator |
2026-01-28 00:57:02.832695 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-01-28 00:57:02.832703 | orchestrator | Wednesday 28 January 2026 00:52:38 +0000 (0:00:01.677) 0:02:05.164 *****
2026-01-28 00:57:02.832711 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.832719 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.832727 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.832735 | orchestrator |
2026-01-28 00:57:02.832743 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-01-28 00:57:02.832751 | orchestrator | Wednesday 28 January 2026 00:52:39 +0000 (0:00:01.596) 0:02:06.761 *****
2026-01-28 00:57:02.832759 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.832771 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.832779 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.832787 | orchestrator |
2026-01-28 00:57:02.832795 | orchestrator | TASK [include_role : glance] ***************************************************
2026-01-28 00:57:02.832803 | orchestrator | Wednesday 28 January 2026 00:52:40 +0000 (0:00:00.438) 0:02:07.200 *****
2026-01-28 00:57:02.832811 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:57:02.832819 | orchestrator |
2026-01-28 00:57:02.832827 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-01-28 00:57:02.832835 | orchestrator | Wednesday 28 January 2026 00:52:40 +0000 (0:00:00.733) 0:02:07.934 *****
2026-01-28 00:57:02.832848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 00:57:02.832864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-28 00:57:02.832883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 00:57:02.832899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-28 00:57:02.832949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 00:57:02.832969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-28 00:57:02.832980 | orchestrator | 2026-01-28 00:57:02.832989 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-01-28 00:57:02.832998 | orchestrator | Wednesday 28 January 2026 00:52:45 +0000 (0:00:04.914) 0:02:12.848 ***** 2026-01-28 00:57:02.833017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-28 00:57:02.833036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-28 00:57:02.833047 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.833062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-28 00:57:02.833078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-28 00:57:02.833087 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.833111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-28 00:57:02.833129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-01-28 00:57:02.833143 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.833152 | orchestrator | 2026-01-28 00:57:02.833160 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-01-28 00:57:02.833168 | orchestrator | Wednesday 28 January 2026 00:52:50 +0000 (0:00:04.646) 0:02:17.495 ***** 2026-01-28 00:57:02.833176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-28 00:57:02.833185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-28 00:57:02.833193 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.833202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-28 00:57:02.833214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-28 00:57:02.833227 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.833239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-28 00:57:02.833247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-01-28 00:57:02.833256 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.833264 | orchestrator | 2026-01-28 00:57:02.833272 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-01-28 00:57:02.833280 | orchestrator | Wednesday 28 January 2026 00:52:54 +0000 (0:00:03.469) 0:02:20.965 ***** 2026-01-28 00:57:02.833288 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.833296 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.833304 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.833311 | orchestrator | 2026-01-28 00:57:02.833320 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-01-28 00:57:02.833328 | orchestrator | Wednesday 28 January 2026 00:52:55 +0000 (0:00:01.326) 0:02:22.291 ***** 2026-01-28 00:57:02.833336 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.833343 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.833351 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.833359 | orchestrator | 2026-01-28 00:57:02.833367 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-01-28 00:57:02.833375 | orchestrator | Wednesday 28 January 2026 00:52:57 +0000 (0:00:02.006) 0:02:24.298 ***** 2026-01-28 00:57:02.833383 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.833391 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.833399 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.833407 
| orchestrator | 2026-01-28 00:57:02.833415 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-01-28 00:57:02.833423 | orchestrator | Wednesday 28 January 2026 00:52:57 +0000 (0:00:00.597) 0:02:24.895 ***** 2026-01-28 00:57:02.833431 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.833439 | orchestrator | 2026-01-28 00:57:02.833446 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-01-28 00:57:02.833454 | orchestrator | Wednesday 28 January 2026 00:52:58 +0000 (0:00:00.873) 0:02:25.769 ***** 2026-01-28 00:57:02.833463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 00:57:02.833472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 00:57:02.833490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 00:57:02.833498 | orchestrator | 2026-01-28 00:57:02.833507 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-01-28 00:57:02.833514 | orchestrator | Wednesday 28 January 2026 00:53:02 +0000 (0:00:03.891) 0:02:29.660 ***** 2026-01-28 00:57:02.833527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-28 00:57:02.833535 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.833544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-28 00:57:02.833552 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.833561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-28 00:57:02.833569 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.833577 | orchestrator | 2026-01-28 00:57:02.833585 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-01-28 00:57:02.833598 | orchestrator | Wednesday 28 January 2026 00:53:03 +0000 (0:00:00.674) 0:02:30.335 ***** 2026-01-28 00:57:02.833606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  
2026-01-28 00:57:02.833614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-28 00:57:02.833622 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.833630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-28 00:57:02.833638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-28 00:57:02.833646 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.833654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-01-28 00:57:02.833666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-01-28 00:57:02.833674 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.833682 | orchestrator | 2026-01-28 00:57:02.833690 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-01-28 00:57:02.833698 | orchestrator | Wednesday 28 January 2026 00:53:04 +0000 (0:00:00.723) 0:02:31.058 ***** 2026-01-28 00:57:02.833706 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.833714 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.833721 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.833729 | orchestrator | 2026-01-28 00:57:02.833737 | orchestrator | 
TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-01-28 00:57:02.833745 | orchestrator | Wednesday 28 January 2026 00:53:05 +0000 (0:00:01.465) 0:02:32.523 ***** 2026-01-28 00:57:02.833757 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.833765 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.833773 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.833781 | orchestrator | 2026-01-28 00:57:02.833789 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-01-28 00:57:02.833796 | orchestrator | Wednesday 28 January 2026 00:53:07 +0000 (0:00:02.272) 0:02:34.796 ***** 2026-01-28 00:57:02.833804 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.833812 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.833820 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.833828 | orchestrator | 2026-01-28 00:57:02.833836 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-01-28 00:57:02.833844 | orchestrator | Wednesday 28 January 2026 00:53:08 +0000 (0:00:00.507) 0:02:35.303 ***** 2026-01-28 00:57:02.833852 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.833859 | orchestrator | 2026-01-28 00:57:02.833867 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-01-28 00:57:02.833875 | orchestrator | Wednesday 28 January 2026 00:53:09 +0000 (0:00:00.851) 0:02:36.155 ***** 2026-01-28 00:57:02.833884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-28 00:57:02.833909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-01-28 00:57:02.833936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-28 00:57:02.833951 | orchestrator | 2026-01-28 00:57:02.833959 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-01-28 00:57:02.833967 | orchestrator | Wednesday 28 January 2026 00:53:12 +0000 (0:00:03.146) 0:02:39.302 ***** 2026-01-28 00:57:02.833985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-28 00:57:02.834002 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.834015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-28 00:57:02.834071 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.834085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-28 00:57:02.834099 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.834108 | orchestrator | 2026-01-28 00:57:02.834116 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-01-28 00:57:02.834125 | orchestrator | Wednesday 28 January 2026 00:53:13 +0000 (0:00:01.338) 0:02:40.640 ***** 2026-01-28 00:57:02.834133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-28 00:57:02.834143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-28 00:57:02.834153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-28 00:57:02.834161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-28 00:57:02.834175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-28 00:57:02.834184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-28 00:57:02.834195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': 
{'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-28 00:57:02.834204 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.834212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-28 00:57:02.834221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-28 00:57:02.834233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-28 00:57:02.834241 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.834250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-28 00:57:02.834258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-28 00:57:02.834266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-01-28 00:57:02.834275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-01-28 00:57:02.834283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-01-28 00:57:02.834291 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.834298 | orchestrator | 2026-01-28 00:57:02.834307 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-01-28 00:57:02.834315 | orchestrator | Wednesday 28 January 2026 00:53:15 +0000 (0:00:01.310) 0:02:41.951 ***** 2026-01-28 00:57:02.834323 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.834330 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.834339 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.834347 | orchestrator | 2026-01-28 00:57:02.834372 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-01-28 00:57:02.834380 | orchestrator | Wednesday 28 January 2026 00:53:16 +0000 (0:00:01.363) 0:02:43.315 ***** 2026-01-28 00:57:02.834388 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.834396 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.834404 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.834412 | orchestrator | 2026-01-28 00:57:02.834420 | 
orchestrator | TASK [include_role : influxdb] ************************************************* 2026-01-28 00:57:02.834428 | orchestrator | Wednesday 28 January 2026 00:53:18 +0000 (0:00:02.057) 0:02:45.373 ***** 2026-01-28 00:57:02.834435 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.834443 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.834451 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.834459 | orchestrator | 2026-01-28 00:57:02.834467 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-01-28 00:57:02.834485 | orchestrator | Wednesday 28 January 2026 00:53:18 +0000 (0:00:00.313) 0:02:45.686 ***** 2026-01-28 00:57:02.834494 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.834514 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.834522 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.834530 | orchestrator | 2026-01-28 00:57:02.834573 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-01-28 00:57:02.834595 | orchestrator | Wednesday 28 January 2026 00:53:19 +0000 (0:00:00.542) 0:02:46.229 ***** 2026-01-28 00:57:02.834603 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.834627 | orchestrator | 2026-01-28 00:57:02.834640 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-01-28 00:57:02.834671 | orchestrator | Wednesday 28 January 2026 00:53:20 +0000 (0:00:00.941) 0:02:47.170 ***** 2026-01-28 00:57:02.834705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 00:57:02.834756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 00:57:02.834767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
 2026-01-28 00:57:02.834776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 00:57:02.834815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 00:57:02.834832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 00:57:02.834875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 00:57:02.834891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 00:57:02.834905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 00:57:02.834957 | orchestrator | 2026-01-28 00:57:02.834968 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-01-28 00:57:02.834976 | orchestrator | Wednesday 28 January 2026 00:53:24 +0000 (0:00:03.811) 0:02:50.982 ***** 2026-01-28 00:57:02.834991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 00:57:02.835011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 00:57:02.835020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 00:57:02.835028 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.835037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 00:57:02.835046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 00:57:02.835054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 00:57:02.835070 | 
orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.835088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 00:57:02.835097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 00:57:02.835105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 00:57:02.835113 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.835121 | orchestrator | 2026-01-28 00:57:02.835129 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-01-28 00:57:02.835137 | orchestrator | Wednesday 28 January 2026 00:53:24 +0000 (0:00:00.662) 0:02:51.645 ***** 2026-01-28 00:57:02.835145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-28 00:57:02.835154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-28 00:57:02.835163 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.835171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-28 00:57:02.835184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-28 00:57:02.835193 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.835201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-28 00:57:02.835214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-01-28 00:57:02.835222 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.835230 | orchestrator | 2026-01-28 00:57:02.835238 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-01-28 00:57:02.835246 | orchestrator | Wednesday 28 January 2026 00:53:25 +0000 (0:00:00.896) 0:02:52.541 ***** 2026-01-28 00:57:02.835254 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.835261 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.835269 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.835277 | orchestrator | 2026-01-28 00:57:02.835285 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-01-28 00:57:02.835293 | orchestrator | Wednesday 28 January 2026 00:53:27 +0000 (0:00:01.406) 0:02:53.947 ***** 2026-01-28 00:57:02.835301 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.835309 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.835320 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.835328 | orchestrator | 2026-01-28 00:57:02.835336 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 
2026-01-28 00:57:02.835344 | orchestrator | Wednesday 28 January 2026 00:53:29 +0000 (0:00:02.127) 0:02:56.074 ***** 2026-01-28 00:57:02.835352 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.835360 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.835367 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.835375 | orchestrator | 2026-01-28 00:57:02.835383 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-01-28 00:57:02.835391 | orchestrator | Wednesday 28 January 2026 00:53:29 +0000 (0:00:00.585) 0:02:56.660 ***** 2026-01-28 00:57:02.835399 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.835406 | orchestrator | 2026-01-28 00:57:02.835414 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-01-28 00:57:02.835422 | orchestrator | Wednesday 28 January 2026 00:53:30 +0000 (0:00:01.083) 0:02:57.743 ***** 2026-01-28 00:57:02.835430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-28 00:57:02.835439 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.835456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-28 00:57:02.835470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.835482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-28 00:57:02.835490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.835503 | orchestrator | 2026-01-28 00:57:02.835511 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-01-28 00:57:02.835519 | orchestrator | Wednesday 28 January 2026 00:53:35 +0000 (0:00:04.621) 0:03:02.365 ***** 2026-01-28 00:57:02.835527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-28 00:57:02.835536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.835544 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.835561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-28 00:57:02.835570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.835578 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.835586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-28 00:57:02.835599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.835607 | orchestrator | skipping: 
[testbed-node-2] 2026-01-28 00:57:02.835616 | orchestrator | 2026-01-28 00:57:02.835623 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-01-28 00:57:02.835631 | orchestrator | Wednesday 28 January 2026 00:53:36 +0000 (0:00:01.041) 0:03:03.407 ***** 2026-01-28 00:57:02.835640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-28 00:57:02.835648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-28 00:57:02.835657 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.835669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-28 00:57:02.835678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-28 00:57:02.835686 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.835694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-01-28 00:57:02.835705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-01-28 00:57:02.835714 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.835721 | orchestrator | 2026-01-28 00:57:02.835729 | orchestrator | 
TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-01-28 00:57:02.835737 | orchestrator | Wednesday 28 January 2026 00:53:37 +0000 (0:00:01.141) 0:03:04.549 ***** 2026-01-28 00:57:02.835745 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.835753 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.835760 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.835768 | orchestrator | 2026-01-28 00:57:02.835776 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-01-28 00:57:02.835784 | orchestrator | Wednesday 28 January 2026 00:53:39 +0000 (0:00:01.533) 0:03:06.082 ***** 2026-01-28 00:57:02.835796 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.835804 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.835812 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.835820 | orchestrator | 2026-01-28 00:57:02.835828 | orchestrator | TASK [include_role : manila] *************************************************** 2026-01-28 00:57:02.835835 | orchestrator | Wednesday 28 January 2026 00:53:41 +0000 (0:00:02.288) 0:03:08.370 ***** 2026-01-28 00:57:02.835843 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.835851 | orchestrator | 2026-01-28 00:57:02.835859 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-01-28 00:57:02.835866 | orchestrator | Wednesday 28 January 2026 00:53:42 +0000 (0:00:01.364) 0:03:09.735 ***** 2026-01-28 00:57:02.835875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-28 00:57:02.835883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.835892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786'}}}}) 2026-01-28 00:57:02.835908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.835935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.835958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.835973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-01-28 00:57:02.835986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836049 | orchestrator | 2026-01-28 00:57:02.836057 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-01-28 00:57:02.836066 | orchestrator | Wednesday 28 January 2026 00:53:46 +0000 (0:00:04.101) 0:03:13.836 ***** 2026-01-28 00:57:02.836074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-28 00:57:02.836082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836120 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.836129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-28 00:57:02.836137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836162 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.836174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-01-28 00:57:02.836191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.836217 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.836226 | orchestrator | 2026-01-28 00:57:02.836235 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-01-28 00:57:02.836244 | orchestrator | Wednesday 28 January 2026 00:53:47 +0000 (0:00:00.652) 0:03:14.489 ***** 2026-01-28 00:57:02.836253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 
'listen_port': '8786'}})  2026-01-28 00:57:02.836262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-28 00:57:02.836271 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.836280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-28 00:57:02.836289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-28 00:57:02.836297 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.836307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-01-28 00:57:02.836315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-01-28 00:57:02.836324 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.836333 | orchestrator | 2026-01-28 00:57:02.836342 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-01-28 00:57:02.836359 | orchestrator | Wednesday 28 January 2026 00:53:48 +0000 (0:00:01.236) 0:03:15.726 ***** 2026-01-28 00:57:02.836367 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.836376 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.836385 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.836393 | orchestrator | 2026-01-28 00:57:02.836402 | 
orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-01-28 00:57:02.836412 | orchestrator | Wednesday 28 January 2026 00:53:50 +0000 (0:00:01.454) 0:03:17.181 ***** 2026-01-28 00:57:02.836420 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.836433 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.836442 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.836451 | orchestrator | 2026-01-28 00:57:02.836459 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-01-28 00:57:02.836468 | orchestrator | Wednesday 28 January 2026 00:53:52 +0000 (0:00:02.127) 0:03:19.308 ***** 2026-01-28 00:57:02.836477 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.836486 | orchestrator | 2026-01-28 00:57:02.836495 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-01-28 00:57:02.836503 | orchestrator | Wednesday 28 January 2026 00:53:53 +0000 (0:00:01.297) 0:03:20.605 ***** 2026-01-28 00:57:02.836513 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-28 00:57:02.836521 | orchestrator | 2026-01-28 00:57:02.836534 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-01-28 00:57:02.836542 | orchestrator | Wednesday 28 January 2026 00:53:56 +0000 (0:00:03.120) 0:03:23.726 ***** 2026-01-28 00:57:02.836552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-28 00:57:02.836562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-28 00:57:02.836576 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.836596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-28 00:57:02.836606 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-28 00:57:02.836615 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.836624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-28 00:57:02.836639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-28 00:57:02.836648 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.836657 | orchestrator | 2026-01-28 00:57:02.836671 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-01-28 00:57:02.836680 | orchestrator | Wednesday 28 January 2026 00:53:59 +0000 (0:00:02.295) 0:03:26.022 ***** 2026-01-28 00:57:02.836693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-28 00:57:02.836703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-28 00:57:02.836712 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.836733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-28 00:57:02.836743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-28 00:57:02.836752 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.836762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-01-28 00:57:02.836776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-01-28 00:57:02.836785 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.836794 | orchestrator | 2026-01-28 00:57:02.836803 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-01-28 00:57:02.836812 | orchestrator | Wednesday 28 January 2026 00:54:01 +0000 (0:00:02.545) 0:03:28.567 ***** 2026-01-28 00:57:02.837050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-28 00:57:02.837091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-28 00:57:02.837102 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.837111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-28 00:57:02.837121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-28 00:57:02.837130 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.837139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-28 00:57:02.837156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-01-28 00:57:02.837166 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.837175 | orchestrator | 2026-01-28 00:57:02.837184 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-01-28 00:57:02.837192 | orchestrator | Wednesday 28 January 2026 00:54:04 +0000 (0:00:02.984) 0:03:31.552 ***** 2026-01-28 
00:57:02.837201 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.837210 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.837219 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.837227 | orchestrator |
2026-01-28 00:57:02.837236 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-01-28 00:57:02.837245 | orchestrator | Wednesday 28 January 2026 00:54:06 +0000 (0:00:02.177) 0:03:33.729 *****
2026-01-28 00:57:02.837254 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.837263 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.837271 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.837280 | orchestrator |
2026-01-28 00:57:02.837288 | orchestrator | TASK [include_role : masakari] *************************************************
2026-01-28 00:57:02.837297 | orchestrator | Wednesday 28 January 2026 00:54:08 +0000 (0:00:01.877) 0:03:35.607 *****
2026-01-28 00:57:02.837306 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.837376 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.837389 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.837397 | orchestrator |
2026-01-28 00:57:02.837406 | orchestrator | TASK [include_role : memcached] ************************************************
2026-01-28 00:57:02.837414 | orchestrator | Wednesday 28 January 2026 00:54:09 +0000 (0:00:00.358) 0:03:35.966 *****
2026-01-28 00:57:02.837423 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:57:02.837432 | orchestrator |
2026-01-28 00:57:02.837440 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-01-28 00:57:02.837449 | orchestrator | Wednesday 28 January 2026 00:54:10 +0000 (0:00:01.400) 0:03:37.366 *****
2026-01-28 00:57:02.837485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-28 00:57:02.837497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-28 00:57:02.837513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-01-28 00:57:02.837523 | orchestrator | 2026-01-28 00:57:02.837531 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-01-28 00:57:02.837540 | orchestrator | Wednesday 28 January 2026 00:54:11 +0000 (0:00:01.568) 0:03:38.935 ***** 2026-01-28 00:57:02.837549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-28 00:57:02.837558 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.837626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 
11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-28 00:57:02.837639 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.837654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-01-28 00:57:02.837670 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.837679 | orchestrator | 2026-01-28 00:57:02.837688 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-01-28 00:57:02.837697 | orchestrator | Wednesday 28 January 2026 00:54:12 +0000 (0:00:00.456) 0:03:39.391 ***** 2026-01-28 00:57:02.837707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-28 00:57:02.837717 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.837726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-28 00:57:02.837736 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.837745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-01-28 00:57:02.837755 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.837764 | orchestrator | 2026-01-28 00:57:02.837773 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-01-28 00:57:02.837783 | orchestrator | Wednesday 28 January 2026 00:54:13 +0000 (0:00:00.968) 0:03:40.360 ***** 2026-01-28 00:57:02.837792 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.837801 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.837810 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.837819 | orchestrator | 2026-01-28 00:57:02.837828 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-01-28 00:57:02.837838 | orchestrator | Wednesday 28 January 2026 00:54:13 +0000 (0:00:00.471) 0:03:40.831 ***** 2026-01-28 00:57:02.837847 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.837856 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.837865 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.837874 | orchestrator | 2026-01-28 00:57:02.837883 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-01-28 00:57:02.837893 | orchestrator | Wednesday 28 January 2026 00:54:15 +0000 (0:00:01.399) 0:03:42.231 ***** 2026-01-28 00:57:02.837967 | orchestrator 
| skipping: [testbed-node-0] 2026-01-28 00:57:02.837979 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.837988 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.837997 | orchestrator | 2026-01-28 00:57:02.838005 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-01-28 00:57:02.838014 | orchestrator | Wednesday 28 January 2026 00:54:15 +0000 (0:00:00.331) 0:03:42.562 ***** 2026-01-28 00:57:02.838052 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.838061 | orchestrator | 2026-01-28 00:57:02.838070 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-01-28 00:57:02.838078 | orchestrator | Wednesday 28 January 2026 00:54:17 +0000 (0:00:01.475) 0:03:44.037 ***** 2026-01-28 00:57:02.838157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 00:57:02.838188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 00:57:02.838218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-28 
00:57:02.838310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-28 00:57:02.838321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.838352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.838441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-28 00:57:02.838451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 00:57:02.838468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 00:57:02.838477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.838567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-28 00:57:02.838599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.838608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.838668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-28 00:57:02.838727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': 
{'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-28 00:57:02.838791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 00:57:02.838807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-28 00:57:02.838829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.838855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-28 00:57:02.838870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.838951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.838969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.838987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 00:57:02.838995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-28 00:57:02.839060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': 
False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-28 00:57:02.839100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-28 00:57:02.839109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.839117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-28 00:57:02.839192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-28 00:57:02.839204 | orchestrator | 2026-01-28 00:57:02.839212 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-01-28 00:57:02.839220 | orchestrator | Wednesday 28 January 2026 00:54:21 +0000 (0:00:04.295) 0:03:48.333 ***** 2026-01-28 00:57:02.839232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 00:57:02.839241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-28 00:57:02.839340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.839369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.839378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 00:57:02.839401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-01-28 00:57:02.839477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-01-28 00:57:02.839486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 00:57:02.839494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-01-28 00:57:02.839582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-01-28 
00:57:02.839591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.839599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-01-28 00:57:02.839613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 00:57:02.839621 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.839641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-01-28 00:57:02.839707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.839719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.839728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.839742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-28 00:57:02.839751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.839759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-28 00:57:02.839818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-01-28 00:57:02.839846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.839856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.839870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes':
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-28 00:57:02.839879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.839887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-28 00:57:02.839982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-01-28 00:57:02.840003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-28 00:57:02.840012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-28 00:57:02.840021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.840038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.840047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-01-28 00:57:02.840121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-28 00:57:02.840140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.840148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-28 00:57:02.840163 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.840171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-01-28 00:57:02.840180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-01-28 00:57:02.840188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.840220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-01-28 00:57:02.840234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-01-28 00:57:02.840242 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.840251 | orchestrator |
2026-01-28 00:57:02.840259 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-01-28 00:57:02.840267 | orchestrator | Wednesday 28 January 2026 00:54:22 +0000 (0:00:01.485) 0:03:49.818
*****
2026-01-28 00:57:02.840281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-01-28 00:57:02.840289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-01-28 00:57:02.840297 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.840305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-01-28 00:57:02.840314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-01-28 00:57:02.840321 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.840329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-01-28 00:57:02.840337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-01-28 00:57:02.840345 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.840353 | orchestrator |
2026-01-28 00:57:02.840361 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-01-28 00:57:02.840381 | orchestrator | Wednesday 28 January 2026 00:54:24 +0000 (0:00:02.036) 0:03:51.854 *****
2026-01-28 00:57:02.840390 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.840398 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.840406 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.840414 | orchestrator |
2026-01-28 00:57:02.840422 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-01-28 00:57:02.840430 | orchestrator | Wednesday 28 January 2026 00:54:26 +0000 (0:00:01.429) 0:03:53.284 *****
2026-01-28 00:57:02.840438 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.840445 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.840453 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.840461 | orchestrator |
2026-01-28 00:57:02.840469 | orchestrator | TASK [include_role : placement] ************************************************
2026-01-28 00:57:02.840477 | orchestrator | Wednesday 28 January 2026 00:54:28 +0000 (0:00:02.006) 0:03:55.290 *****
2026-01-28 00:57:02.840485 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:57:02.840493 | orchestrator |
2026-01-28 00:57:02.840501 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-01-28 00:57:02.840509 | orchestrator | Wednesday 28 January 2026 00:54:29 +0000 (0:00:01.206) 0:03:56.497 *****
2026-01-28 00:57:02.840540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.840560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.840569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'},
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.840578 | orchestrator |
2026-01-28 00:57:02.840586 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-01-28 00:57:02.840593 | orchestrator | Wednesday 28 January 2026 00:54:33 +0000 (0:00:03.973) 0:04:00.470 *****
2026-01-28 00:57:02.840601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.840610 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.840640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.840657 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.840669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.840677 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.840686 | orchestrator |
2026-01-28 00:57:02.840694 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-01-28 00:57:02.840701 | orchestrator | Wednesday 28 January 2026 00:54:34 +0000 (0:00:00.558) 0:04:01.029 *****
2026-01-28 00:57:02.840709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-28 00:57:02.840717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-28 00:57:02.840726 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.840734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-28 00:57:02.840742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-28 00:57:02.840751 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:57:02.840760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-28 00:57:02.840770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-01-28 00:57:02.840779 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:57:02.840788 | orchestrator |
2026-01-28 00:57:02.840797 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-01-28 00:57:02.840806 | orchestrator | Wednesday 28 January 2026 00:54:34 +0000 (0:00:00.848) 0:04:01.878 *****
2026-01-28 00:57:02.840815 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.840824 | orchestrator | changed:
[testbed-node-1]
2026-01-28 00:57:02.840833 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.840842 | orchestrator |
2026-01-28 00:57:02.840851 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-01-28 00:57:02.840860 | orchestrator | Wednesday 28 January 2026 00:54:36 +0000 (0:00:01.916) 0:04:03.794 *****
2026-01-28 00:57:02.840869 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:57:02.840878 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:57:02.840887 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:57:02.840896 | orchestrator |
2026-01-28 00:57:02.840905 | orchestrator | TASK [include_role : nova] *****************************************************
2026-01-28 00:57:02.840967 | orchestrator | Wednesday 28 January 2026 00:54:38 +0000 (0:00:01.745) 0:04:05.540 *****
2026-01-28 00:57:02.840978 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:57:02.840988 | orchestrator |
2026-01-28 00:57:02.840997 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-01-28 00:57:02.841006 | orchestrator | Wednesday 28 January 2026 00:54:40 +0000 (0:00:01.538) 0:04:07.079 *****
2026-01-28 00:57:02.841047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.841059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.841068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.841078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.841091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.841122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image':
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.841136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.841146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.841154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.841162 | orchestrator |
2026-01-28 00:57:02.841175 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-01-28 00:57:02.841183 | orchestrator | Wednesday 28 January 2026 00:54:44 +0000 (0:00:04.127) 0:04:11.206 *****
2026-01-28 00:57:02.841214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-01-28 00:57:02.841228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.841237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-28 00:57:02.841245 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:57:02.841254 |
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.841271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.841280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.841288 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.841322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.841333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.841341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.841349 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.841357 | orchestrator | 2026-01-28 00:57:02.841365 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-01-28 00:57:02.841378 | orchestrator | Wednesday 28 January 2026 00:54:45 +0000 (0:00:01.307) 0:04:12.514 ***** 2026-01-28 00:57:02.841386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841419 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.841427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841484 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.841490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841500 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-01-28 00:57:02.841521 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.841528 | orchestrator | 2026-01-28 00:57:02.841534 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-01-28 00:57:02.841541 | orchestrator | Wednesday 28 January 2026 00:54:46 +0000 (0:00:01.006) 0:04:13.521 ***** 2026-01-28 00:57:02.841548 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.841555 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.841561 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.841568 | orchestrator | 2026-01-28 00:57:02.841574 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-01-28 00:57:02.841581 | orchestrator | Wednesday 28 January 2026 00:54:47 +0000 (0:00:01.395) 0:04:14.916 ***** 2026-01-28 00:57:02.841588 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.841599 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.841605 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.841612 | orchestrator | 2026-01-28 00:57:02.841618 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-01-28 00:57:02.841625 
| orchestrator | Wednesday 28 January 2026 00:54:50 +0000 (0:00:02.071) 0:04:16.988 ***** 2026-01-28 00:57:02.841632 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.841638 | orchestrator | 2026-01-28 00:57:02.841645 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-01-28 00:57:02.841651 | orchestrator | Wednesday 28 January 2026 00:54:51 +0000 (0:00:01.584) 0:04:18.573 ***** 2026-01-28 00:57:02.841658 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-01-28 00:57:02.841665 | orchestrator | 2026-01-28 00:57:02.841671 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-01-28 00:57:02.841678 | orchestrator | Wednesday 28 January 2026 00:54:52 +0000 (0:00:00.827) 0:04:19.400 ***** 2026-01-28 00:57:02.841685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-28 00:57:02.841692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-28 00:57:02.841700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-01-28 00:57:02.841707 | orchestrator | 2026-01-28 00:57:02.841713 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-01-28 00:57:02.841738 | orchestrator | Wednesday 28 January 2026 00:54:56 +0000 (0:00:04.350) 0:04:23.751 ***** 2026-01-28 00:57:02.841746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-28 00:57:02.841753 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.841760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-28 00:57:02.841774 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.841830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-28 00:57:02.841845 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.841852 | orchestrator | 2026-01-28 00:57:02.841859 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-01-28 00:57:02.841865 | orchestrator | Wednesday 28 January 2026 00:54:57 +0000 (0:00:01.020) 0:04:24.771 ***** 2026-01-28 00:57:02.841872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-28 00:57:02.841880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-28 00:57:02.841887 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.841894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-28 00:57:02.841901 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-28 00:57:02.841908 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.841930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-28 00:57:02.841939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-01-28 00:57:02.841946 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.841953 | orchestrator | 2026-01-28 00:57:02.841959 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-28 00:57:02.841966 | orchestrator | Wednesday 28 January 2026 00:54:59 +0000 (0:00:01.545) 0:04:26.317 ***** 2026-01-28 00:57:02.841973 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.841979 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.841986 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.841992 | orchestrator | 2026-01-28 00:57:02.841999 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-28 00:57:02.842005 | orchestrator | Wednesday 28 January 2026 00:55:01 +0000 (0:00:02.590) 0:04:28.908 ***** 2026-01-28 00:57:02.842012 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.842043 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.842050 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.842057 | orchestrator | 
2026-01-28 00:57:02.842090 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-01-28 00:57:02.842098 | orchestrator | Wednesday 28 January 2026 00:55:05 +0000 (0:00:03.104) 0:04:32.013 ***** 2026-01-28 00:57:02.842104 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-01-28 00:57:02.842117 | orchestrator | 2026-01-28 00:57:02.842124 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-01-28 00:57:02.842131 | orchestrator | Wednesday 28 January 2026 00:55:06 +0000 (0:00:01.460) 0:04:33.473 ***** 2026-01-28 00:57:02.842141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-28 00:57:02.842149 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.842156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-28 00:57:02.842163 | orchestrator | skipping: 
[testbed-node-0] 2026-01-28 00:57:02.842169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-28 00:57:02.842176 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.842183 | orchestrator | 2026-01-28 00:57:02.842190 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-01-28 00:57:02.842196 | orchestrator | Wednesday 28 January 2026 00:55:07 +0000 (0:00:01.303) 0:04:34.777 ***** 2026-01-28 00:57:02.842203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-28 00:57:02.842210 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.842217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-28 00:57:02.842224 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.842231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-01-28 00:57:02.842242 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.842249 | orchestrator | 2026-01-28 00:57:02.842273 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-01-28 00:57:02.842281 | orchestrator | Wednesday 28 January 2026 00:55:09 +0000 (0:00:01.301) 0:04:36.078 ***** 2026-01-28 00:57:02.842288 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.842294 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.842301 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.842307 | orchestrator | 2026-01-28 00:57:02.842314 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-28 00:57:02.842321 | orchestrator | Wednesday 28 January 2026 00:55:11 +0000 (0:00:01.881) 0:04:37.959 ***** 2026-01-28 00:57:02.842327 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.842334 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.842341 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.842347 | orchestrator | 2026-01-28 00:57:02.842354 | orchestrator | TASK [proxysql-config : Copying over 
nova-cell ProxySQL rules config] ********** 2026-01-28 00:57:02.842364 | orchestrator | Wednesday 28 January 2026 00:55:13 +0000 (0:00:02.340) 0:04:40.300 ***** 2026-01-28 00:57:02.842371 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.842378 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.842384 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.842391 | orchestrator | 2026-01-28 00:57:02.842398 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-01-28 00:57:02.842404 | orchestrator | Wednesday 28 January 2026 00:55:16 +0000 (0:00:03.368) 0:04:43.668 ***** 2026-01-28 00:57:02.842411 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-01-28 00:57:02.842418 | orchestrator | 2026-01-28 00:57:02.842425 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-01-28 00:57:02.842431 | orchestrator | Wednesday 28 January 2026 00:55:17 +0000 (0:00:00.910) 0:04:44.579 ***** 2026-01-28 00:57:02.842438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-28 00:57:02.842445 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.842452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-28 00:57:02.842459 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.842466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-28 00:57:02.842478 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.842484 | orchestrator | 2026-01-28 00:57:02.842491 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-01-28 00:57:02.842497 | orchestrator | Wednesday 28 January 2026 00:55:19 +0000 (0:00:01.405) 0:04:45.985 ***** 2026-01-28 00:57:02.842504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-28 00:57:02.842511 | orchestrator | skipping: [testbed-node-0] 
2026-01-28 00:57:02.842536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-28 00:57:02.842544 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.842554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-01-28 00:57:02.842561 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.842568 | orchestrator | 2026-01-28 00:57:02.842575 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-01-28 00:57:02.842582 | orchestrator | Wednesday 28 January 2026 00:55:20 +0000 (0:00:01.509) 0:04:47.494 ***** 2026-01-28 00:57:02.842588 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.842595 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.842602 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.842609 | orchestrator | 2026-01-28 00:57:02.842615 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-01-28 00:57:02.842622 | orchestrator | Wednesday 
28 January 2026 00:55:22 +0000 (0:00:01.707) 0:04:49.202 ***** 2026-01-28 00:57:02.842629 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.842636 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.842642 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.842649 | orchestrator | 2026-01-28 00:57:02.842656 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-01-28 00:57:02.842662 | orchestrator | Wednesday 28 January 2026 00:55:24 +0000 (0:00:02.448) 0:04:51.651 ***** 2026-01-28 00:57:02.842669 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.842676 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.842682 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.842689 | orchestrator | 2026-01-28 00:57:02.842696 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-01-28 00:57:02.842702 | orchestrator | Wednesday 28 January 2026 00:55:28 +0000 (0:00:03.498) 0:04:55.149 ***** 2026-01-28 00:57:02.842709 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.842716 | orchestrator | 2026-01-28 00:57:02.842723 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-01-28 00:57:02.842735 | orchestrator | Wednesday 28 January 2026 00:55:29 +0000 (0:00:01.614) 0:04:56.763 ***** 2026-01-28 00:57:02.842742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.842749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 00:57:02.842758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.842789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.842797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.842804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 00:57:02.842816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.842823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.842830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.842856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 
'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.842867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.842875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-01-28 00:57:02.842887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.842894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.842901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.842908 | 
orchestrator | 2026-01-28 00:57:02.842930 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-01-28 00:57:02.842938 | orchestrator | Wednesday 28 January 2026 00:55:33 +0000 (0:00:03.743) 0:05:00.507 ***** 2026-01-28 00:57:02.842969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.842978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 00:57:02.842989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.842996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.843003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.843010 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.843034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.843048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 00:57:02.843055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.843067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.843074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.843081 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.843088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.843095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 00:57:02.843119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.843131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 00:57:02.843142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 00:57:02.843149 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.843156 | orchestrator | 2026-01-28 00:57:02.843163 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-01-28 00:57:02.843170 | orchestrator | Wednesday 28 January 2026 00:55:34 +0000 (0:00:00.712) 0:05:01.219 ***** 2026-01-28 00:57:02.843177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-28 00:57:02.843184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-28 00:57:02.843191 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.843197 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-28 00:57:02.843204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-28 00:57:02.843211 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.843218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-28 00:57:02.843224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-01-28 00:57:02.843231 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.843238 | orchestrator | 2026-01-28 00:57:02.843244 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-01-28 00:57:02.843251 | orchestrator | Wednesday 28 January 2026 00:55:35 +0000 (0:00:01.207) 0:05:02.427 ***** 2026-01-28 00:57:02.843257 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.843264 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.843271 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.843278 | orchestrator | 2026-01-28 00:57:02.843284 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-01-28 00:57:02.843291 | orchestrator | Wednesday 28 January 2026 00:55:36 +0000 (0:00:01.432) 0:05:03.860 ***** 2026-01-28 00:57:02.843297 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.843304 | orchestrator | changed: 
[testbed-node-1] 2026-01-28 00:57:02.843311 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.843317 | orchestrator | 2026-01-28 00:57:02.843324 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-01-28 00:57:02.843348 | orchestrator | Wednesday 28 January 2026 00:55:38 +0000 (0:00:02.055) 0:05:05.915 ***** 2026-01-28 00:57:02.843360 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.843367 | orchestrator | 2026-01-28 00:57:02.843374 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-01-28 00:57:02.843380 | orchestrator | Wednesday 28 January 2026 00:55:40 +0000 (0:00:01.346) 0:05:07.262 ***** 2026-01-28 00:57:02.843391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-28 00:57:02.843399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-28 00:57:02.843406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-28 00:57:02.843414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-28 00:57:02.843447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-28 00:57:02.843457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-28 00:57:02.843464 | orchestrator | 2026-01-28 00:57:02.843471 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-01-28 00:57:02.843478 | orchestrator | Wednesday 28 January 2026 00:55:45 +0000 (0:00:05.421) 0:05:12.683 ***** 2026-01-28 00:57:02.843485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  
2026-01-28 00:57:02.843493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-28 00:57:02.843504 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.843532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-28 00:57:02.843541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-28 00:57:02.843548 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.843555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-01-28 00:57:02.843563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-28 00:57:02.843574 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.843581 | orchestrator | 2026-01-28 00:57:02.843588 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-01-28 00:57:02.843595 | orchestrator | Wednesday 28 January 2026 00:55:46 +0000 (0:00:00.700) 0:05:13.384 ***** 2026-01-28 00:57:02.843619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-28 00:57:02.843627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-28 00:57:02.843637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-28 00:57:02.843644 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.843651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-28 00:57:02.843658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-28 00:57:02.843665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-28 00:57:02.843671 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.843678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-01-28 00:57:02.843685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-28 00:57:02.843692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-01-28 00:57:02.843698 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.843705 | orchestrator | 2026-01-28 00:57:02.843712 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-01-28 00:57:02.843718 | orchestrator | Wednesday 28 January 2026 00:55:47 +0000 (0:00:00.960) 0:05:14.344 ***** 2026-01-28 00:57:02.843725 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.843732 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.843739 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.843745 | orchestrator | 2026-01-28 00:57:02.843752 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-01-28 00:57:02.843758 | orchestrator | Wednesday 28 January 2026 00:55:48 +0000 (0:00:00.825) 0:05:15.170 ***** 2026-01-28 00:57:02.843765 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.843771 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.843783 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.843790 | orchestrator | 2026-01-28 00:57:02.843797 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-01-28 00:57:02.843803 | orchestrator | Wednesday 28 January 2026 00:55:49 +0000 (0:00:01.370) 0:05:16.541 ***** 2026-01-28 00:57:02.843810 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.843817 | orchestrator | 2026-01-28 00:57:02.843823 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-01-28 00:57:02.843830 | orchestrator | Wednesday 28 January 2026 00:55:51 +0000 (0:00:01.464) 0:05:18.005 ***** 2026-01-28 00:57:02.843837 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-28 00:57:02.843863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 00:57:02.843875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.843882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.843889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.843897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-28 00:57:02.843910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 00:57:02.843964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-28 00:57:02.844031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 00:57:02.844044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-28 00:57:02.844094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-28 00:57:02.844101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-28 00:57:02.844139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-28 00:57:02.844146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-28 00:57:02.844182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-28 00:57:02.844192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844216 | orchestrator | 2026-01-28 00:57:02.844223 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-01-28 00:57:02.844229 | orchestrator | Wednesday 28 January 2026 00:55:55 +0000 (0:00:04.624) 0:05:22.629 ***** 2026-01-28 00:57:02.844236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-28 00:57:02.844242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 00:57:02.844249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-28 00:57:02.844288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-28 00:57:02.844295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844302 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844318 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.844327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-28 00:57:02.844334 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 00:57:02.844344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-28 00:57:02.844377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-28 00:57:02.844384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-28 00:57:02.844411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844418 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.844424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 00:57:02.844435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}}}})  2026-01-28 00:57:02.844471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-01-28 00:57:02.844477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 00:57:02.844494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 00:57:02.844500 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.844510 | orchestrator | 2026-01-28 00:57:02.844520 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-01-28 00:57:02.844526 | orchestrator | Wednesday 28 January 2026 00:55:56 +0000 (0:00:01.280) 0:05:23.909 ***** 2026-01-28 00:57:02.844533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-28 00:57:02.844539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-28 00:57:02.844546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-28 00:57:02.844554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-28 00:57:02.844560 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.844566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-28 00:57:02.844573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-28 00:57:02.844579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-28 00:57:02.844586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-28 00:57:02.844592 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.844599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-01-28 00:57:02.844605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-01-28 00:57:02.844612 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-28 00:57:02.844618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-01-28 00:57:02.844624 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.844631 | orchestrator | 2026-01-28 00:57:02.844637 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-01-28 00:57:02.844644 | orchestrator | Wednesday 28 January 2026 00:55:58 +0000 (0:00:01.091) 0:05:25.001 ***** 2026-01-28 00:57:02.844657 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.844663 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.844670 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.844676 | orchestrator | 2026-01-28 00:57:02.844682 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-01-28 00:57:02.844688 | orchestrator | Wednesday 28 January 2026 00:55:58 +0000 (0:00:00.444) 0:05:25.445 ***** 2026-01-28 00:57:02.844694 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.844701 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.844707 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.844713 | orchestrator | 2026-01-28 00:57:02.844720 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-01-28 00:57:02.844726 | orchestrator | Wednesday 28 January 2026 00:55:59 +0000 (0:00:01.422) 0:05:26.867 ***** 2026-01-28 00:57:02.844735 
| orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.844741 | orchestrator | 2026-01-28 00:57:02.844748 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-01-28 00:57:02.844754 | orchestrator | Wednesday 28 January 2026 00:56:01 +0000 (0:00:01.834) 0:05:28.702 ***** 2026-01-28 00:57:02.844760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:57:02.844768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:57:02.844775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-01-28 00:57:02.844785 | orchestrator | 2026-01-28 00:57:02.844792 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-01-28 00:57:02.844798 | orchestrator | Wednesday 28 January 2026 00:56:04 +0000 (0:00:02.500) 0:05:31.203 ***** 2026-01-28 00:57:02.844811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-28 00:57:02.844818 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.844825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-28 00:57:02.844832 | orchestrator | skipping: 
[testbed-node-1] 2026-01-28 00:57:02.844839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-01-28 00:57:02.844845 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.844852 | orchestrator | 2026-01-28 00:57:02.844858 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-01-28 00:57:02.844864 | orchestrator | Wednesday 28 January 2026 00:56:04 +0000 (0:00:00.430) 0:05:31.633 ***** 2026-01-28 00:57:02.844874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-28 00:57:02.844881 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.844888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-28 00:57:02.844894 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.844901 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-01-28 00:57:02.844907 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.844913 | orchestrator | 2026-01-28 00:57:02.844937 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-01-28 00:57:02.844943 | orchestrator | Wednesday 28 January 2026 00:56:05 +0000 (0:00:01.080) 0:05:32.714 ***** 2026-01-28 00:57:02.844949 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.844955 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.844961 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.844967 | orchestrator | 2026-01-28 00:57:02.844974 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-01-28 00:57:02.844983 | orchestrator | Wednesday 28 January 2026 00:56:06 +0000 (0:00:00.503) 0:05:33.218 ***** 2026-01-28 00:57:02.844990 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.844996 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.845002 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.845008 | orchestrator | 2026-01-28 00:57:02.845014 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-01-28 00:57:02.845020 | orchestrator | Wednesday 28 January 2026 00:56:07 +0000 (0:00:01.394) 0:05:34.612 ***** 2026-01-28 00:57:02.845027 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:57:02.845033 | orchestrator | 2026-01-28 00:57:02.845039 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-01-28 00:57:02.845045 | orchestrator | Wednesday 28 January 2026 00:56:09 +0000 (0:00:01.813) 0:05:36.425 ***** 2026-01-28 00:57:02.845055 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.845062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.845073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.845083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.845094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.845101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-01-28 00:57:02.845107 | orchestrator | 2026-01-28 00:57:02.845113 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-01-28 00:57:02.845123 | orchestrator | 
Wednesday 28 January 2026 00:56:15 +0000 (0:00:06.407) 0:05:42.832 ***** 2026-01-28 00:57:02.845130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.845140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}}}})  2026-01-28 00:57:02.845146 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.845156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.845163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.845173 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.845179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.845186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-01-28 00:57:02.845192 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.845199 | orchestrator | 2026-01-28 00:57:02.845205 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-01-28 00:57:02.845214 | orchestrator | Wednesday 28 January 2026 00:56:16 +0000 (0:00:00.683) 0:05:43.516 ***** 2026-01-28 00:57:02.845220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845249 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.845256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845286 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.845292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-01-28 00:57:02.845317 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.845324 | orchestrator | 2026-01-28 00:57:02.845330 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-01-28 00:57:02.845336 | orchestrator | 
Wednesday 28 January 2026 00:56:18 +0000 (0:00:01.769) 0:05:45.286 ***** 2026-01-28 00:57:02.845342 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.845348 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.845355 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.845361 | orchestrator | 2026-01-28 00:57:02.845367 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-01-28 00:57:02.845373 | orchestrator | Wednesday 28 January 2026 00:56:19 +0000 (0:00:01.292) 0:05:46.578 ***** 2026-01-28 00:57:02.845379 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.845385 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.845391 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.845398 | orchestrator | 2026-01-28 00:57:02.845404 | orchestrator | TASK [include_role : swift] **************************************************** 2026-01-28 00:57:02.845410 | orchestrator | Wednesday 28 January 2026 00:56:21 +0000 (0:00:02.124) 0:05:48.703 ***** 2026-01-28 00:57:02.845416 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.845422 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.845429 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.845435 | orchestrator | 2026-01-28 00:57:02.845441 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-01-28 00:57:02.845447 | orchestrator | Wednesday 28 January 2026 00:56:22 +0000 (0:00:00.324) 0:05:49.028 ***** 2026-01-28 00:57:02.845453 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.845459 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.845465 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.845471 | orchestrator | 2026-01-28 00:57:02.845478 | orchestrator | TASK [include_role : trove] **************************************************** 2026-01-28 00:57:02.845486 | orchestrator | Wednesday 
28 January 2026 00:56:22 +0000 (0:00:00.326) 0:05:49.355 ***** 2026-01-28 00:57:02.845493 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.845499 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.845505 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.845511 | orchestrator | 2026-01-28 00:57:02.845518 | orchestrator | TASK [include_role : venus] **************************************************** 2026-01-28 00:57:02.845524 | orchestrator | Wednesday 28 January 2026 00:56:23 +0000 (0:00:00.669) 0:05:50.024 ***** 2026-01-28 00:57:02.845530 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.845536 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.845546 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.845552 | orchestrator | 2026-01-28 00:57:02.845558 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-01-28 00:57:02.845565 | orchestrator | Wednesday 28 January 2026 00:56:23 +0000 (0:00:00.340) 0:05:50.364 ***** 2026-01-28 00:57:02.845574 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.845580 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.845586 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.845592 | orchestrator | 2026-01-28 00:57:02.845598 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-01-28 00:57:02.845605 | orchestrator | Wednesday 28 January 2026 00:56:23 +0000 (0:00:00.329) 0:05:50.693 ***** 2026-01-28 00:57:02.845611 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.845617 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.845623 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.845629 | orchestrator | 2026-01-28 00:57:02.845635 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-01-28 00:57:02.845642 | orchestrator | Wednesday 28 
January 2026 00:56:24 +0000 (0:00:00.870) 0:05:51.564 ***** 2026-01-28 00:57:02.845648 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.845654 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.845660 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.845666 | orchestrator | 2026-01-28 00:57:02.845672 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-01-28 00:57:02.845678 | orchestrator | Wednesday 28 January 2026 00:56:25 +0000 (0:00:00.706) 0:05:52.270 ***** 2026-01-28 00:57:02.845684 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.845691 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.845697 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.845703 | orchestrator | 2026-01-28 00:57:02.845709 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-01-28 00:57:02.845715 | orchestrator | Wednesday 28 January 2026 00:56:25 +0000 (0:00:00.351) 0:05:52.622 ***** 2026-01-28 00:57:02.845721 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.845727 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.845733 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.845740 | orchestrator | 2026-01-28 00:57:02.845746 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-01-28 00:57:02.845752 | orchestrator | Wednesday 28 January 2026 00:56:26 +0000 (0:00:00.848) 0:05:53.470 ***** 2026-01-28 00:57:02.845758 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.845764 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.845771 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.845777 | orchestrator | 2026-01-28 00:57:02.845783 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-01-28 00:57:02.845789 | orchestrator | Wednesday 28 January 2026 00:56:27 +0000 (0:00:01.124) 0:05:54.594 ***** 2026-01-28 
00:57:02.845795 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.845801 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.845807 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.845813 | orchestrator | 2026-01-28 00:57:02.845820 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-01-28 00:57:02.845826 | orchestrator | Wednesday 28 January 2026 00:56:28 +0000 (0:00:00.780) 0:05:55.375 ***** 2026-01-28 00:57:02.845832 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.845838 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.845844 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.845851 | orchestrator | 2026-01-28 00:57:02.845857 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-01-28 00:57:02.845863 | orchestrator | Wednesday 28 January 2026 00:56:32 +0000 (0:00:04.212) 0:05:59.587 ***** 2026-01-28 00:57:02.845869 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.845875 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.845882 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.845888 | orchestrator | 2026-01-28 00:57:02.845898 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-01-28 00:57:02.845904 | orchestrator | Wednesday 28 January 2026 00:56:34 +0000 (0:00:01.727) 0:06:01.315 ***** 2026-01-28 00:57:02.845910 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.845958 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.845965 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.845971 | orchestrator | 2026-01-28 00:57:02.845978 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-01-28 00:57:02.845984 | orchestrator | Wednesday 28 January 2026 00:56:46 +0000 (0:00:12.504) 0:06:13.820 ***** 2026-01-28 00:57:02.845990 | orchestrator | ok: 
[testbed-node-0] 2026-01-28 00:57:02.845996 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.846003 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.846009 | orchestrator | 2026-01-28 00:57:02.846035 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-01-28 00:57:02.846044 | orchestrator | Wednesday 28 January 2026 00:56:48 +0000 (0:00:01.131) 0:06:14.952 ***** 2026-01-28 00:57:02.846050 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:57:02.846056 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:57:02.846062 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:57:02.846069 | orchestrator | 2026-01-28 00:57:02.846075 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-01-28 00:57:02.846081 | orchestrator | Wednesday 28 January 2026 00:56:52 +0000 (0:00:04.122) 0:06:19.074 ***** 2026-01-28 00:57:02.846087 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.846094 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.846100 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.846106 | orchestrator | 2026-01-28 00:57:02.846113 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-01-28 00:57:02.846119 | orchestrator | Wednesday 28 January 2026 00:56:52 +0000 (0:00:00.420) 0:06:19.494 ***** 2026-01-28 00:57:02.846126 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.846137 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.846143 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.846149 | orchestrator | 2026-01-28 00:57:02.846156 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-01-28 00:57:02.846162 | orchestrator | Wednesday 28 January 2026 00:56:52 +0000 (0:00:00.373) 0:06:19.867 ***** 2026-01-28 00:57:02.846168 | orchestrator | skipping: [testbed-node-0] 
2026-01-28 00:57:02.846174 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.846181 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.846187 | orchestrator | 2026-01-28 00:57:02.846193 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-01-28 00:57:02.846199 | orchestrator | Wednesday 28 January 2026 00:56:53 +0000 (0:00:00.803) 0:06:20.671 ***** 2026-01-28 00:57:02.846205 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.846212 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.846218 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.846224 | orchestrator | 2026-01-28 00:57:02.846230 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-01-28 00:57:02.846237 | orchestrator | Wednesday 28 January 2026 00:56:54 +0000 (0:00:00.376) 0:06:21.048 ***** 2026-01-28 00:57:02.846243 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.846249 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.846256 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.846262 | orchestrator | 2026-01-28 00:57:02.846268 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-01-28 00:57:02.846274 | orchestrator | Wednesday 28 January 2026 00:56:54 +0000 (0:00:00.374) 0:06:21.423 ***** 2026-01-28 00:57:02.846281 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:57:02.846287 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:57:02.846293 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:57:02.846299 | orchestrator | 2026-01-28 00:57:02.846305 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-01-28 00:57:02.846317 | orchestrator | Wednesday 28 January 2026 00:56:54 +0000 (0:00:00.372) 0:06:21.796 ***** 2026-01-28 00:57:02.846323 | orchestrator | ok: [testbed-node-1] 
2026-01-28 00:57:02.846329 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.846335 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.846342 | orchestrator | 2026-01-28 00:57:02.846348 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-01-28 00:57:02.846354 | orchestrator | Wednesday 28 January 2026 00:57:00 +0000 (0:00:05.158) 0:06:26.954 ***** 2026-01-28 00:57:02.846360 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:57:02.846366 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:57:02.846371 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:57:02.846376 | orchestrator | 2026-01-28 00:57:02.846382 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:57:02.846388 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-28 00:57:02.846394 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-28 00:57:02.846399 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-01-28 00:57:02.846405 | orchestrator | 2026-01-28 00:57:02.846410 | orchestrator | 2026-01-28 00:57:02.846415 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:57:02.846421 | orchestrator | Wednesday 28 January 2026 00:57:00 +0000 (0:00:00.826) 0:06:27.781 ***** 2026-01-28 00:57:02.846426 | orchestrator | =============================================================================== 2026-01-28 00:57:02.846432 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.50s 2026-01-28 00:57:02.846437 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.41s 2026-01-28 00:57:02.846443 | orchestrator | haproxy-config : Copying over designate haproxy config 
------------------ 5.87s 2026-01-28 00:57:02.846448 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.42s 2026-01-28 00:57:02.846453 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.16s 2026-01-28 00:57:02.846459 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.04s 2026-01-28 00:57:02.846464 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.96s 2026-01-28 00:57:02.846469 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.91s 2026-01-28 00:57:02.846475 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.86s 2026-01-28 00:57:02.846480 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 4.65s 2026-01-28 00:57:02.846486 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.62s 2026-01-28 00:57:02.846491 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.62s 2026-01-28 00:57:02.846496 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.42s 2026-01-28 00:57:02.846502 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.35s 2026-01-28 00:57:02.846507 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.30s 2026-01-28 00:57:02.846512 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.21s 2026-01-28 00:57:02.846518 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.13s 2026-01-28 00:57:02.846556 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.12s 2026-01-28 00:57:02.846568 | orchestrator | haproxy-config : Copying over manila haproxy config 
--------------------- 4.10s 2026-01-28 00:57:02.846573 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.07s 2026-01-28 00:57:02.846583 | orchestrator | 2026-01-28 00:57:02 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:57:05.866437 | orchestrator | 2026-01-28 00:57:05 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:57:05.866740 | orchestrator | 2026-01-28 00:57:05 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:57:05.868019 | orchestrator | 2026-01-28 00:57:05 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:57:05.868043 | orchestrator | 2026-01-28 00:57:05 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:57:48.619753 | orchestrator | 2026-01-28 00:57:48 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:57:48.622271 | orchestrator | 2026-01-28 00:57:48 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:57:48.625546 | orchestrator | 2026-01-28 00:57:48 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 
00:57:48.625585 | orchestrator | 2026-01-28 00:57:48 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:57:51.668050 | orchestrator | 2026-01-28 00:57:51 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:57:51.668563 | orchestrator | 2026-01-28 00:57:51 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:57:51.669480 | orchestrator | 2026-01-28 00:57:51 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:57:51.669515 | orchestrator | 2026-01-28 00:57:51 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:57:54.723016 | orchestrator | 2026-01-28 00:57:54 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:57:54.724446 | orchestrator | 2026-01-28 00:57:54 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:57:54.726601 | orchestrator | 2026-01-28 00:57:54 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:57:54.726640 | orchestrator | 2026-01-28 00:57:54 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:57:57.771085 | orchestrator | 2026-01-28 00:57:57 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:57:57.771363 | orchestrator | 2026-01-28 00:57:57 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:57:57.774108 | orchestrator | 2026-01-28 00:57:57 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:57:57.774282 | orchestrator | 2026-01-28 00:57:57 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:00.824546 | orchestrator | 2026-01-28 00:58:00 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:00.827169 | orchestrator | 2026-01-28 00:58:00 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:00.829605 | orchestrator | 2026-01-28 00:58:00 | 
INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:00.830159 | orchestrator | 2026-01-28 00:58:00 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:03.879596 | orchestrator | 2026-01-28 00:58:03 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:03.882178 | orchestrator | 2026-01-28 00:58:03 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:03.884681 | orchestrator | 2026-01-28 00:58:03 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:03.884714 | orchestrator | 2026-01-28 00:58:03 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:06.927980 | orchestrator | 2026-01-28 00:58:06 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:06.928638 | orchestrator | 2026-01-28 00:58:06 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:06.930690 | orchestrator | 2026-01-28 00:58:06 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:06.930998 | orchestrator | 2026-01-28 00:58:06 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:09.986397 | orchestrator | 2026-01-28 00:58:09 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:09.988098 | orchestrator | 2026-01-28 00:58:09 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:09.992337 | orchestrator | 2026-01-28 00:58:09 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:09.992803 | orchestrator | 2026-01-28 00:58:09 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:13.051034 | orchestrator | 2026-01-28 00:58:13 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:13.052856 | orchestrator | 2026-01-28 00:58:13 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in 
state STARTED 2026-01-28 00:58:13.055199 | orchestrator | 2026-01-28 00:58:13 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:13.055237 | orchestrator | 2026-01-28 00:58:13 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:16.097742 | orchestrator | 2026-01-28 00:58:16 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:16.100971 | orchestrator | 2026-01-28 00:58:16 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:16.102793 | orchestrator | 2026-01-28 00:58:16 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:16.102907 | orchestrator | 2026-01-28 00:58:16 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:19.140108 | orchestrator | 2026-01-28 00:58:19 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:19.140215 | orchestrator | 2026-01-28 00:58:19 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:19.140478 | orchestrator | 2026-01-28 00:58:19 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:19.140594 | orchestrator | 2026-01-28 00:58:19 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:22.193987 | orchestrator | 2026-01-28 00:58:22 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:22.196835 | orchestrator | 2026-01-28 00:58:22 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:22.199395 | orchestrator | 2026-01-28 00:58:22 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:22.199438 | orchestrator | 2026-01-28 00:58:22 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:25.251045 | orchestrator | 2026-01-28 00:58:25 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:25.251636 | orchestrator 
| 2026-01-28 00:58:25 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:25.252996 | orchestrator | 2026-01-28 00:58:25 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:25.253435 | orchestrator | 2026-01-28 00:58:25 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:28.294337 | orchestrator | 2026-01-28 00:58:28 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:28.296019 | orchestrator | 2026-01-28 00:58:28 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:28.298445 | orchestrator | 2026-01-28 00:58:28 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:28.298764 | orchestrator | 2026-01-28 00:58:28 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:31.349283 | orchestrator | 2026-01-28 00:58:31 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:31.350117 | orchestrator | 2026-01-28 00:58:31 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:31.351467 | orchestrator | 2026-01-28 00:58:31 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:31.351807 | orchestrator | 2026-01-28 00:58:31 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:34.403774 | orchestrator | 2026-01-28 00:58:34 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:34.405716 | orchestrator | 2026-01-28 00:58:34 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:34.407911 | orchestrator | 2026-01-28 00:58:34 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:34.407992 | orchestrator | 2026-01-28 00:58:34 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:37.459655 | orchestrator | 2026-01-28 00:58:37 | INFO  | Task 
c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:37.460692 | orchestrator | 2026-01-28 00:58:37 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:37.462242 | orchestrator | 2026-01-28 00:58:37 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:37.462268 | orchestrator | 2026-01-28 00:58:37 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:40.509476 | orchestrator | 2026-01-28 00:58:40 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:40.510655 | orchestrator | 2026-01-28 00:58:40 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:40.512221 | orchestrator | 2026-01-28 00:58:40 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:40.512246 | orchestrator | 2026-01-28 00:58:40 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:43.565277 | orchestrator | 2026-01-28 00:58:43 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:43.569107 | orchestrator | 2026-01-28 00:58:43 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:43.570873 | orchestrator | 2026-01-28 00:58:43 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:43.570921 | orchestrator | 2026-01-28 00:58:43 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:46.619170 | orchestrator | 2026-01-28 00:58:46 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:46.621586 | orchestrator | 2026-01-28 00:58:46 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:46.625359 | orchestrator | 2026-01-28 00:58:46 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:46.625639 | orchestrator | 2026-01-28 00:58:46 | INFO  | Wait 1 second(s) until the next 
check 2026-01-28 00:58:49.667272 | orchestrator | 2026-01-28 00:58:49 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:49.668773 | orchestrator | 2026-01-28 00:58:49 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:49.670747 | orchestrator | 2026-01-28 00:58:49 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:49.670803 | orchestrator | 2026-01-28 00:58:49 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:52.718373 | orchestrator | 2026-01-28 00:58:52 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:52.719174 | orchestrator | 2026-01-28 00:58:52 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:52.721141 | orchestrator | 2026-01-28 00:58:52 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:52.721212 | orchestrator | 2026-01-28 00:58:52 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:55.765580 | orchestrator | 2026-01-28 00:58:55 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:55.767103 | orchestrator | 2026-01-28 00:58:55 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:55.769086 | orchestrator | 2026-01-28 00:58:55 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:58:55.769162 | orchestrator | 2026-01-28 00:58:55 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:58:58.814159 | orchestrator | 2026-01-28 00:58:58 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:58:58.815835 | orchestrator | 2026-01-28 00:58:58 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:58:58.818373 | orchestrator | 2026-01-28 00:58:58 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 
00:58:58.818790 | orchestrator | 2026-01-28 00:58:58 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:01.868121 | orchestrator | 2026-01-28 00:59:01 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:01.870806 | orchestrator | 2026-01-28 00:59:01 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:59:01.874365 | orchestrator | 2026-01-28 00:59:01 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:01.874570 | orchestrator | 2026-01-28 00:59:01 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:04.920667 | orchestrator | 2026-01-28 00:59:04 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:04.922361 | orchestrator | 2026-01-28 00:59:04 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:59:04.923706 | orchestrator | 2026-01-28 00:59:04 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:04.923750 | orchestrator | 2026-01-28 00:59:04 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:07.973752 | orchestrator | 2026-01-28 00:59:07 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:07.975604 | orchestrator | 2026-01-28 00:59:07 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:59:07.977556 | orchestrator | 2026-01-28 00:59:07 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:07.977730 | orchestrator | 2026-01-28 00:59:07 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:11.018549 | orchestrator | 2026-01-28 00:59:11 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:11.019912 | orchestrator | 2026-01-28 00:59:11 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:59:11.021422 | orchestrator | 2026-01-28 00:59:11 | 
INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:11.021450 | orchestrator | 2026-01-28 00:59:11 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:14.067351 | orchestrator | 2026-01-28 00:59:14 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:14.070102 | orchestrator | 2026-01-28 00:59:14 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:59:14.071353 | orchestrator | 2026-01-28 00:59:14 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:14.071633 | orchestrator | 2026-01-28 00:59:14 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:17.122962 | orchestrator | 2026-01-28 00:59:17 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:17.125771 | orchestrator | 2026-01-28 00:59:17 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:59:17.128108 | orchestrator | 2026-01-28 00:59:17 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:17.128140 | orchestrator | 2026-01-28 00:59:17 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:20.177314 | orchestrator | 2026-01-28 00:59:20 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:20.177481 | orchestrator | 2026-01-28 00:59:20 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:59:20.178742 | orchestrator | 2026-01-28 00:59:20 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:20.178789 | orchestrator | 2026-01-28 00:59:20 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:23.225316 | orchestrator | 2026-01-28 00:59:23 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:23.228658 | orchestrator | 2026-01-28 00:59:23 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in 
state STARTED 2026-01-28 00:59:23.230234 | orchestrator | 2026-01-28 00:59:23 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:23.230300 | orchestrator | 2026-01-28 00:59:23 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:26.282766 | orchestrator | 2026-01-28 00:59:26 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:26.285048 | orchestrator | 2026-01-28 00:59:26 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:59:26.287189 | orchestrator | 2026-01-28 00:59:26 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:26.287913 | orchestrator | 2026-01-28 00:59:26 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:29.337572 | orchestrator | 2026-01-28 00:59:29 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:29.339784 | orchestrator | 2026-01-28 00:59:29 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:59:29.342494 | orchestrator | 2026-01-28 00:59:29 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:29.342519 | orchestrator | 2026-01-28 00:59:29 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:32.384573 | orchestrator | 2026-01-28 00:59:32 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:32.386812 | orchestrator | 2026-01-28 00:59:32 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state STARTED 2026-01-28 00:59:32.389942 | orchestrator | 2026-01-28 00:59:32 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state STARTED 2026-01-28 00:59:32.389988 | orchestrator | 2026-01-28 00:59:32 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:35.439304 | orchestrator | 2026-01-28 00:59:35 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:35.442686 | orchestrator 
| 2026-01-28 00:59:35 | INFO  | Task c17fea7a-1ddc-4d82-852c-8a992702ad4e is in state SUCCESS 2026-01-28 00:59:35.444500 | orchestrator | 2026-01-28 00:59:35.444536 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-28 00:59:35.444544 | orchestrator | 2.16.14 2026-01-28 00:59:35.444552 | orchestrator | 2026-01-28 00:59:35.444559 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-01-28 00:59:35.444590 | orchestrator | 2026-01-28 00:59:35.444598 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-28 00:59:35.444605 | orchestrator | Wednesday 28 January 2026 00:48:17 +0000 (0:00:00.896) 0:00:00.897 ***** 2026-01-28 00:59:35.444612 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.444620 | orchestrator | 2026-01-28 00:59:35.444625 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-28 00:59:35.444629 | orchestrator | Wednesday 28 January 2026 00:48:18 +0000 (0:00:01.098) 0:00:01.996 ***** 2026-01-28 00:59:35.444633 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.444637 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.444641 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.444645 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.444649 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.444653 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.444656 | orchestrator | 2026-01-28 00:59:35.444660 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-01-28 00:59:35.444664 | orchestrator | Wednesday 28 January 2026 00:48:20 +0000 (0:00:01.449) 0:00:03.445 ***** 2026-01-28 00:59:35.444668 | orchestrator | ok: [testbed-node-3] 2026-01-28 
00:59:35.444671 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.444675 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.444679 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.444682 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.444686 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.444690 | orchestrator | 2026-01-28 00:59:35.444694 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-28 00:59:35.444697 | orchestrator | Wednesday 28 January 2026 00:48:21 +0000 (0:00:00.962) 0:00:04.408 ***** 2026-01-28 00:59:35.444701 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.444705 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.444708 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.444712 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.444716 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.444719 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.444723 | orchestrator | 2026-01-28 00:59:35.444727 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-28 00:59:35.444730 | orchestrator | Wednesday 28 January 2026 00:48:22 +0000 (0:00:01.026) 0:00:05.434 ***** 2026-01-28 00:59:35.444734 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.444738 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.444742 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.444745 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.444749 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.444753 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.444756 | orchestrator | 2026-01-28 00:59:35.444760 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-28 00:59:35.444764 | orchestrator | Wednesday 28 January 2026 00:48:23 +0000 (0:00:00.884) 0:00:06.319 ***** 2026-01-28 00:59:35.444768 | orchestrator | ok: 
[testbed-node-3] 2026-01-28 00:59:35.444772 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.444776 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.444779 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.444783 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.444787 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.444790 | orchestrator | 2026-01-28 00:59:35.444794 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-28 00:59:35.444798 | orchestrator | Wednesday 28 January 2026 00:48:23 +0000 (0:00:00.592) 0:00:06.911 ***** 2026-01-28 00:59:35.444802 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.444818 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.444824 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.444830 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.444917 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.444927 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.444933 | orchestrator | 2026-01-28 00:59:35.444939 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-28 00:59:35.444946 | orchestrator | Wednesday 28 January 2026 00:48:24 +0000 (0:00:01.015) 0:00:07.927 ***** 2026-01-28 00:59:35.444952 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.444959 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.444965 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.444971 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.444978 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.444984 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.444990 | orchestrator | 2026-01-28 00:59:35.444997 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-28 00:59:35.445004 | orchestrator | Wednesday 28 January 2026 00:48:25 +0000 
(0:00:00.666) 0:00:08.593 ***** 2026-01-28 00:59:35.445011 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.445017 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.445024 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.445030 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.445094 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.445103 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.445110 | orchestrator | 2026-01-28 00:59:35.445115 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-28 00:59:35.445120 | orchestrator | Wednesday 28 January 2026 00:48:26 +0000 (0:00:00.916) 0:00:09.510 ***** 2026-01-28 00:59:35.445125 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-28 00:59:35.445130 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-28 00:59:35.445134 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-28 00:59:35.445139 | orchestrator | 2026-01-28 00:59:35.445164 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-28 00:59:35.445169 | orchestrator | Wednesday 28 January 2026 00:48:27 +0000 (0:00:00.673) 0:00:10.184 ***** 2026-01-28 00:59:35.445173 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.445177 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.445182 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.445195 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.445199 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.445204 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.445208 | orchestrator | 2026-01-28 00:59:35.445212 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-28 00:59:35.445216 | orchestrator | Wednesday 28 January 2026 
00:48:28 +0000 (0:00:01.776) 0:00:11.961 *****
2026-01-28 00:59:35.445221 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-28 00:59:35.445225 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-28 00:59:35.445230 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-28 00:59:35.445234 | orchestrator |
2026-01-28 00:59:35.445238 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-01-28 00:59:35.445243 | orchestrator | Wednesday 28 January 2026 00:48:32 +0000 (0:00:03.202) 0:00:15.164 *****
2026-01-28 00:59:35.445261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-28 00:59:35.445267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-28 00:59:35.445271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-28 00:59:35.445275 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445280 | orchestrator |
2026-01-28 00:59:35.445284 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-01-28 00:59:35.445288 | orchestrator | Wednesday 28 January 2026 00:48:32 +0000 (0:00:00.537) 0:00:15.701 *****
2026-01-28 00:59:35.445294 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.445307 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.445311 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.445316 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445320 | orchestrator |
2026-01-28 00:59:35.445325 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-01-28 00:59:35.445329 | orchestrator | Wednesday 28 January 2026 00:48:33 +0000 (0:00:00.864) 0:00:16.565 *****
2026-01-28 00:59:35.445335 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.445348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.445352 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.445357 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445361 | orchestrator |
2026-01-28 00:59:35.445365 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-01-28 00:59:35.445370 | orchestrator | Wednesday 28 January 2026 00:48:33 +0000 (0:00:00.331) 0:00:16.897 *****
2026-01-28 00:59:35.445380 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-28 00:48:29.729095', 'end': '2026-01-28 00:48:29.991775', 'delta': '0:00:00.262680', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.445387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-28 00:48:31.046787', 'end': '2026-01-28 00:48:31.271458', 'delta': '0:00:00.224671', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.445395 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-28 00:48:31.714673', 'end': '2026-01-28 00:48:31.977725', 'delta': '0:00:00.263052', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.445400 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445404 | orchestrator |
2026-01-28 00:59:35.445409 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-01-28 00:59:35.445413 | orchestrator | Wednesday 28 January 2026 00:48:34 +0000 (0:00:00.161) 0:00:17.059 *****
2026-01-28 00:59:35.445417 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.445421 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.445426 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.445430 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.445434 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.445438 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.445443 | orchestrator |
2026-01-28 00:59:35.445447 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-01-28 00:59:35.445451 | orchestrator | Wednesday 28 January 2026 00:48:35 +0000 (0:00:01.735) 0:00:18.794 *****
2026-01-28 00:59:35.445455 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-28 00:59:35.445460 | orchestrator |
2026-01-28 00:59:35.445464 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-01-28 00:59:35.445468 | orchestrator | Wednesday 28 January 2026 00:48:36 +0000 (0:00:00.936) 0:00:19.731 *****
2026-01-28 00:59:35.445473 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445477 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.445483 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.445488 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.445492 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.445497 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.445501 | orchestrator |
2026-01-28 00:59:35.445505 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-01-28 00:59:35.445510 | orchestrator | Wednesday 28 January 2026 00:48:38 +0000 (0:00:01.927) 0:00:21.658 *****
2026-01-28 00:59:35.445514 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445518 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.445522 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.445527 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.445530 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.445534 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.445538 | orchestrator |
2026-01-28 00:59:35.445542 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-28 00:59:35.445545 | orchestrator | Wednesday 28 January 2026 00:48:40 +0000 (0:00:02.034) 0:00:23.693 *****
2026-01-28 00:59:35.445549 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445553 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.445557 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.445560 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.445564 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.445568 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.445571 | orchestrator |
2026-01-28 00:59:35.445575 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-01-28 00:59:35.445582 | orchestrator | Wednesday 28 January 2026 00:48:41 +0000 (0:00:01.339) 0:00:25.032 *****
2026-01-28 00:59:35.445586 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445590 | orchestrator |
2026-01-28 00:59:35.445593 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-01-28 00:59:35.445597 | orchestrator | Wednesday 28 January 2026 00:48:42 +0000 (0:00:00.390) 0:00:25.422 *****
2026-01-28 00:59:35.445601 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445605 | orchestrator |
2026-01-28 00:59:35.445608 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-01-28 00:59:35.445612 | orchestrator | Wednesday 28 January 2026 00:48:43 +0000 (0:00:00.743) 0:00:26.166 *****
2026-01-28 00:59:35.445616 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445620 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.445623 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.445630 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.445634 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.445637 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.445641 | orchestrator |
2026-01-28 00:59:35.445645 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-01-28 00:59:35.445649 | orchestrator | Wednesday 28 January 2026 00:48:44 +0000 (0:00:01.051) 0:00:27.217 *****
2026-01-28 00:59:35.445652 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445656 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.445660 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.445664 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.445667 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.445671 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.445675 | orchestrator |
2026-01-28 00:59:35.445679 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-01-28 00:59:35.445682 | orchestrator | Wednesday 28 January 2026 00:48:45 +0000 (0:00:01.034) 0:00:28.252 *****
2026-01-28 00:59:35.445711 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445716 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.445720 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.445723 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.445727 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.445731 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.445735 | orchestrator |
2026-01-28 00:59:35.445738 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-01-28 00:59:35.445742 | orchestrator | Wednesday 28 January 2026 00:48:46 +0000 (0:00:01.552) 0:00:29.805 *****
2026-01-28 00:59:35.445746 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.445749 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445753 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.445757 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.445761 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.445764 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.445768 | orchestrator |
2026-01-28 00:59:35.445772 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-01-28 00:59:35.445776 | orchestrator | Wednesday 28 January 2026 00:48:47 +0000 (0:00:00.849) 0:00:30.654 *****
2026-01-28 00:59:35.445779 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445783 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.445787 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.445791 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.445794 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.445798 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.445802 | orchestrator |
2026-01-28 00:59:35.445806 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-01-28 00:59:35.445809 | orchestrator | Wednesday 28 January 2026 00:48:48 +0000 (0:00:00.962) 0:00:31.616 *****
2026-01-28 00:59:35.445813 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445820 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.445824 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.445897 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.445902 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.445906 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.445909 | orchestrator |
2026-01-28 00:59:35.445913 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-01-28 00:59:35.445917 | orchestrator | Wednesday 28 January 2026 00:48:49 +0000 (0:00:01.150) 0:00:32.767 *****
2026-01-28 00:59:35.445921 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.445924 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.445928 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.445932 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.445935 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.445939 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.445943 | orchestrator |
2026-01-28 00:59:35.445950 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-01-28 00:59:35.445953 | orchestrator | Wednesday 28 January 2026 00:48:50 +0000 (0:00:00.827) 0:00:33.594 *****
2026-01-28 00:59:35.445958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe-osd--block--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe', 'dm-uuid-LVM-BuuylK42M4sAxBlhnDIIurvZHyCeVCsgXTItj8X84JRWTcMCSsGIbJh2LmIJreU4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.445964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cf0ea652--88a6--5aa8--929a--ed9131fd0cef-osd--block--cf0ea652--88a6--5aa8--929a--ed9131fd0cef', 'dm-uuid-LVM-EdsurwuGKZufF9XVsDJukuhsKhfu1ggWUyScsX0MF9OOWySHTJp1xCZzqLTe1NJD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.445971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.445975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.445979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.445983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.445992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.445996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part15', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-28 00:59:35.446077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe-osd--block--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4bPgF7-Vfr9-RZz2-tbWr-gSfa-6KPe-2MWuwN', 'scsi-0QEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250', 'scsi-SQEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-28 00:59:35.446085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cf0ea652--88a6--5aa8--929a--ed9131fd0cef-osd--block--cf0ea652--88a6--5aa8--929a--ed9131fd0cef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YK1El4-y6r6-KkY1-0cH0-prT4-ZF4x-ZXUFUA', 'scsi-0QEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d', 'scsi-SQEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-28 00:59:35.446089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59', 'scsi-SQEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-28 00:59:35.446094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-28 00:59:35.446103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e01643e5--7b60--5b49--bc8a--cfec0728964e-osd--block--e01643e5--7b60--5b49--bc8a--cfec0728964e', 'dm-uuid-LVM-NbzBTxqeS0v8OLHU0diczabMjdpA9hEuwC1CGwcm3OqXNzdGc6gIHp2bseol3nfc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446107 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae2f77e7--beca--5176--aee2--b01d14f9def4-osd--block--ae2f77e7--beca--5176--aee2--b01d14f9def4', 'dm-uuid-LVM-kbDRFqdPNw52PykaapAiUHvnSFqt9fS0lVLSJThpjo8x8a1YfaF2PG22wa3khepJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446126 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446136 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.446140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-28 00:59:35.446169 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e01643e5--7b60--5b49--bc8a--cfec0728964e-osd--block--e01643e5--7b60--5b49--bc8a--cfec0728964e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cU7pY2-cQuF-A1YO-e6Ud-t9dX-bbsF-IBAbek', 'scsi-0QEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772', 'scsi-SQEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-28 00:59:35.446174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ae2f77e7--beca--5176--aee2--b01d14f9def4-osd--block--ae2f77e7--beca--5176--aee2--b01d14f9def4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yhnmRE-3dTL-6agf-Zw3c-v8PG-xEkE-5lXaUy', 'scsi-0QEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f', 'scsi-SQEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-28 00:59:35.446182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d', 'scsi-SQEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-28 00:59:35.446189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-01-28 00:59:35.446193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e-osd--block--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e', 'dm-uuid-LVM-0qDjjo5Cy36D4QNVUdVbEU60mRT1YMFZpWhKgewiIWD9xe7cCOUyxT7KnqUR0TaA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6-osd--block--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6', 'dm-uuid-LVM-wMIkpdbksNS8xVnUKnZmuEyVN1ecWDtjKTwyJcc49GTQefJ93Aa8WyZajAMuD9iN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-01-28 00:59:35.446235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value':
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part1', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part14', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part15', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part16', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e-osd--block--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-t0OFdX-49rb-dGJy-sNiA-CXRc-i2Mk-NfstfW', 'scsi-0QEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d', 'scsi-SQEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6-osd--block--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hscqIg-8ApU-btZD-n3Qv-YKoO-fBEH-5Udamz', 'scsi-0QEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37', 'scsi-SQEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446265 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9', 'scsi-SQEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446273 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.446279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b', 'scsi-SQEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part1', 'scsi-SQEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part14', 'scsi-SQEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part15', 'scsi-SQEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part16', 'scsi-SQEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446337 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.446341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446378 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.446386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835', 'scsi-SQEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part1', 'scsi-SQEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part14', 'scsi-SQEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part15', 'scsi-SQEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part16', 'scsi-SQEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446391 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446395 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.446402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-01-28 00:59:35.446420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 00:59:35.446445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701', 'scsi-SQEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part1', 'scsi-SQEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part14', 'scsi-SQEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part15', 'scsi-SQEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part16', 'scsi-SQEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 00:59:35.446459 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.446463 | orchestrator | 2026-01-28 00:59:35.446467 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-01-28 00:59:35.446471 | orchestrator | Wednesday 28 January 2026 00:48:52 +0000 (0:00:02.323) 0:00:35.918 ***** 2026-01-28 00:59:35.446475 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe-osd--block--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe', 'dm-uuid-LVM-BuuylK42M4sAxBlhnDIIurvZHyCeVCsgXTItj8X84JRWTcMCSsGIbJh2LmIJreU4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446480 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cf0ea652--88a6--5aa8--929a--ed9131fd0cef-osd--block--cf0ea652--88a6--5aa8--929a--ed9131fd0cef', 'dm-uuid-LVM-EdsurwuGKZufF9XVsDJukuhsKhfu1ggWUyScsX0MF9OOWySHTJp1xCZzqLTe1NJD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446486 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446508 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446512 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446516 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e01643e5--7b60--5b49--bc8a--cfec0728964e-osd--block--e01643e5--7b60--5b49--bc8a--cfec0728964e', 'dm-uuid-LVM-NbzBTxqeS0v8OLHU0diczabMjdpA9hEuwC1CGwcm3OqXNzdGc6gIHp2bseol3nfc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446522 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446529 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae2f77e7--beca--5176--aee2--b01d14f9def4-osd--block--ae2f77e7--beca--5176--aee2--b01d14f9def4', 'dm-uuid-LVM-kbDRFqdPNw52PykaapAiUHvnSFqt9fS0lVLSJThpjo8x8a1YfaF2PG22wa3khepJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446540 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part15', 
'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446555 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446562 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe-osd--block--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4bPgF7-Vfr9-RZz2-tbWr-gSfa-6KPe-2MWuwN', 'scsi-0QEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250', 'scsi-SQEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446566 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446595 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--cf0ea652--88a6--5aa8--929a--ed9131fd0cef-osd--block--cf0ea652--88a6--5aa8--929a--ed9131fd0cef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YK1El4-y6r6-KkY1-0cH0-prT4-ZF4x-ZXUFUA', 'scsi-0QEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d', 'scsi-SQEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59', 'scsi-SQEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446617 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446621 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446630 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446634 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446641 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446677 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446683 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e01643e5--7b60--5b49--bc8a--cfec0728964e-osd--block--e01643e5--7b60--5b49--bc8a--cfec0728964e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cU7pY2-cQuF-A1YO-e6Ud-t9dX-bbsF-IBAbek', 'scsi-0QEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772', 'scsi-SQEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ae2f77e7--beca--5176--aee2--b01d14f9def4-osd--block--ae2f77e7--beca--5176--aee2--b01d14f9def4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yhnmRE-3dTL-6agf-Zw3c-v8PG-xEkE-5lXaUy', 'scsi-0QEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f', 'scsi-SQEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446699 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d', 'scsi-SQEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446711 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e-osd--block--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e', 'dm-uuid-LVM-0qDjjo5Cy36D4QNVUdVbEU60mRT1YMFZpWhKgewiIWD9xe7cCOUyxT7KnqUR0TaA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446715 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6-osd--block--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6', 'dm-uuid-LVM-wMIkpdbksNS8xVnUKnZmuEyVN1ecWDtjKTwyJcc49GTQefJ93Aa8WyZajAMuD9iN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446721 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446728 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446732 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446736 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.446743 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446806 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446821 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446825 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446829 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446838 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446842 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446846 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.446858 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part1', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part14', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part15', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 
'5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part16', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.447603 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.447683 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e-osd--block--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-t0OFdX-49rb-dGJy-sNiA-CXRc-i2Mk-NfstfW', 'scsi-0QEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d', 'scsi-SQEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.447707 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.447748 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.447766 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6-osd--block--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hscqIg-8ApU-btZD-n3Qv-YKoO-fBEH-5Udamz', 'scsi-0QEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37', 'scsi-SQEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.447797 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9', 'scsi-SQEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.447821 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b', 'scsi-SQEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part1', 'scsi-SQEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part14', 'scsi-SQEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part15', 
'scsi-SQEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part16', 'scsi-SQEMU_QEMU_HARDDISK_c6f74680-d470-418b-9174-209ebb6c671b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.447847 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.447971 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.447987 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.447997 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448006 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448022 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448035 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448044 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448052 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448071 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448092 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448120 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835', 'scsi-SQEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part1', 'scsi-SQEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part14', 'scsi-SQEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part15', 'scsi-SQEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part16', 'scsi-SQEMU_QEMU_HARDDISK_0612b1ae-ec08-4713-9db6-5b0c740ef835-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 
'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448146 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448168 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.448182 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.448195 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.448208 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448222 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448244 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448259 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448273 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448361 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448394 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448409 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 00:59:35.448437 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701', 'scsi-SQEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part1', 'scsi-SQEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part14', 'scsi-SQEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part15', 'scsi-SQEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 
'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part16', 'scsi-SQEMU_QEMU_HARDDISK_d8da0d7b-f707-4e6a-9b76-8e65b0275701-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.448454 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-23-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-28 00:59:35.448468 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.448482 | orchestrator |
2026-01-28 00:59:35.448503 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-28 00:59:35.448516 | orchestrator | Wednesday 28 January 2026 00:48:54 +0000 (0:00:01.603) 0:00:37.522 *****
2026-01-28 00:59:35.448529 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.448544 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.448557 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.448570 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.448584 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.448597 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.448610 | orchestrator |
2026-01-28 00:59:35.448624 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-28 00:59:35.448646 | orchestrator | Wednesday 28 January 2026 00:48:55 +0000 (0:00:01.464) 0:00:38.986 *****
2026-01-28 00:59:35.448659 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.448673 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.448685 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.448698 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.448710 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.448723 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.448735 | orchestrator |
2026-01-28 00:59:35.448748 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-28 00:59:35.448762 | orchestrator | Wednesday 28 January 2026 00:48:56 +0000 (0:00:00.588) 0:00:39.574 *****
2026-01-28 00:59:35.448776 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.448790 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.448804 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.448817 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.448830 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.448843 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.448857 | orchestrator |
2026-01-28 00:59:35.448921 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-28 00:59:35.448935 | orchestrator | Wednesday 28 January 2026 00:48:57 +0000 (0:00:01.168) 0:00:40.743 *****
2026-01-28 00:59:35.448948 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.448961 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.448975 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.448988 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.449000 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.449013 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.449027 | orchestrator |
2026-01-28 00:59:35.449040 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-28 00:59:35.449053 | orchestrator | Wednesday 28 January 2026 00:48:58 +0000 (0:00:00.869) 0:00:41.612 *****
2026-01-28 00:59:35.449066 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.449080 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.449094 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.449107 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.449120 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.449132 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.449145 | orchestrator |
2026-01-28 00:59:35.449159 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-28 00:59:35.449173 | orchestrator | Wednesday 28 January 2026 00:48:59 +0000 (0:00:00.989) 0:00:42.706 *****
2026-01-28 00:59:35.449186 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.449199 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.449211 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.449224 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.449238 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.449251 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.449265 | orchestrator |
2026-01-28 00:59:35.449278 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-28 00:59:35.449291 | orchestrator | Wednesday 28 January 2026 00:49:00 +0000 (0:00:00.989) 0:00:43.695 *****
2026-01-28 00:59:35.449304 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-28 00:59:35.449317 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-28 00:59:35.449338 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-28 00:59:35.449350 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-28 00:59:35.449364 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-28 00:59:35.449377 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-28 00:59:35.449390 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-28 00:59:35.449403 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-28 00:59:35.449416 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-01-28 00:59:35.449439 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-28 00:59:35.449452 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-28 00:59:35.449466 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-28 00:59:35.449479 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-01-28 00:59:35.449492 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-01-28 00:59:35.449505 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-28 00:59:35.449519 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-01-28 00:59:35.449532 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-01-28 00:59:35.449545 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-01-28 00:59:35.449558 | orchestrator |
2026-01-28 00:59:35.449571 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-28 00:59:35.449584 | orchestrator | Wednesday 28 January 2026 00:49:03 +0000 (0:00:02.702) 0:00:46.397 *****
2026-01-28 00:59:35.449597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-28 00:59:35.449611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-28 00:59:35.449624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-28 00:59:35.449637 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.449650 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-28 00:59:35.449664 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-28 00:59:35.449676 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-28 00:59:35.449690 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.449703 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-28 00:59:35.449726 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-28 00:59:35.449740 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-28 00:59:35.449753 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.449766 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-28 00:59:35.449779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-28 00:59:35.449793 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-28 00:59:35.449805 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.449818 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-01-28 00:59:35.449832 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-01-28 00:59:35.449844 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-01-28 00:59:35.449858 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.449893 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-01-28 00:59:35.449906 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-01-28 00:59:35.449919 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-01-28 00:59:35.449932 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.449945 | orchestrator | 2026-01-28 00:59:35.449958 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-01-28 00:59:35.449970 | orchestrator | Wednesday 28 January 2026 00:49:04 +0000 (0:00:00.959) 0:00:47.356 ***** 2026-01-28 00:59:35.449983 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.449997 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.450010 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.451084 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.451098 | orchestrator | 2026-01-28 00:59:35.451106 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-28 00:59:35.451122 | orchestrator | Wednesday 28 January 2026 00:49:05 +0000 (0:00:01.116) 0:00:48.473 ***** 2026-01-28 00:59:35.451130 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.451149 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.451157 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.451164 | orchestrator | 2026-01-28 00:59:35.451172 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-28 00:59:35.451180 | orchestrator | Wednesday 28 January 2026 00:49:06 +0000 (0:00:00.597) 0:00:49.070 ***** 2026-01-28 00:59:35.451188 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.451196 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.451204 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.451212 | orchestrator | 2026-01-28 00:59:35.451219 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv6] **** 2026-01-28 00:59:35.451227 | orchestrator | Wednesday 28 January 2026 00:49:06 +0000 (0:00:00.510) 0:00:49.581 ***** 2026-01-28 00:59:35.451235 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.451242 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.451250 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.451258 | orchestrator | 2026-01-28 00:59:35.451265 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-28 00:59:35.451273 | orchestrator | Wednesday 28 January 2026 00:49:07 +0000 (0:00:00.780) 0:00:50.362 ***** 2026-01-28 00:59:35.451281 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.451289 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.451296 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.451304 | orchestrator | 2026-01-28 00:59:35.451312 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-28 00:59:35.451326 | orchestrator | Wednesday 28 January 2026 00:49:07 +0000 (0:00:00.546) 0:00:50.909 ***** 2026-01-28 00:59:35.451334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:59:35.451342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-28 00:59:35.451350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:59:35.451358 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.451365 | orchestrator | 2026-01-28 00:59:35.451373 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-28 00:59:35.451381 | orchestrator | Wednesday 28 January 2026 00:49:08 +0000 (0:00:00.463) 0:00:51.372 ***** 2026-01-28 00:59:35.451389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:59:35.451397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
2026-01-28 00:59:35.451404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:59:35.451412 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.451420 | orchestrator | 2026-01-28 00:59:35.451428 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-28 00:59:35.451435 | orchestrator | Wednesday 28 January 2026 00:49:09 +0000 (0:00:00.736) 0:00:52.108 ***** 2026-01-28 00:59:35.451443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:59:35.451451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-28 00:59:35.451459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:59:35.451467 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.451474 | orchestrator | 2026-01-28 00:59:35.451482 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-28 00:59:35.451490 | orchestrator | Wednesday 28 January 2026 00:49:09 +0000 (0:00:00.515) 0:00:52.623 ***** 2026-01-28 00:59:35.451497 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.451505 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.451513 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.451520 | orchestrator | 2026-01-28 00:59:35.451528 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-28 00:59:35.451536 | orchestrator | Wednesday 28 January 2026 00:49:10 +0000 (0:00:00.531) 0:00:53.154 ***** 2026-01-28 00:59:35.451544 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-28 00:59:35.451552 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-28 00:59:35.451574 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-28 00:59:35.451582 | orchestrator | 2026-01-28 00:59:35.451590 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-28 
00:59:35.451597 | orchestrator | Wednesday 28 January 2026 00:49:11 +0000 (0:00:01.747) 0:00:54.902 ***** 2026-01-28 00:59:35.451605 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-28 00:59:35.451613 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-28 00:59:35.451621 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-28 00:59:35.451629 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-28 00:59:35.451637 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-28 00:59:35.451644 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-28 00:59:35.451652 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-28 00:59:35.451660 | orchestrator | 2026-01-28 00:59:35.451668 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-28 00:59:35.451675 | orchestrator | Wednesday 28 January 2026 00:49:13 +0000 (0:00:01.313) 0:00:56.215 ***** 2026-01-28 00:59:35.451683 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-28 00:59:35.451691 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-28 00:59:35.451698 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-28 00:59:35.451706 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-28 00:59:35.451714 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-28 00:59:35.451722 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-28 00:59:35.451729 | orchestrator | ok: [testbed-node-3 -> 
testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-28 00:59:35.451737 | orchestrator | 2026-01-28 00:59:35.451745 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-28 00:59:35.451752 | orchestrator | Wednesday 28 January 2026 00:49:15 +0000 (0:00:02.350) 0:00:58.565 ***** 2026-01-28 00:59:35.451761 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.451770 | orchestrator | 2026-01-28 00:59:35.451778 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-28 00:59:35.451785 | orchestrator | Wednesday 28 January 2026 00:49:17 +0000 (0:00:01.598) 0:01:00.163 ***** 2026-01-28 00:59:35.451793 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.451801 | orchestrator | 2026-01-28 00:59:35.451809 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-28 00:59:35.451816 | orchestrator | Wednesday 28 January 2026 00:49:18 +0000 (0:00:01.555) 0:01:01.719 ***** 2026-01-28 00:59:35.451824 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.451832 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.451843 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.451851 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.451882 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.451890 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.451898 | orchestrator | 2026-01-28 00:59:35.451906 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-28 00:59:35.451913 | orchestrator | Wednesday 28 January 2026 00:49:20 
+0000 (0:00:01.364) 0:01:03.083 ***** 2026-01-28 00:59:35.451921 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.451929 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.451943 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.451951 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.451958 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.451966 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.451974 | orchestrator | 2026-01-28 00:59:35.451982 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-28 00:59:35.451990 | orchestrator | Wednesday 28 January 2026 00:49:21 +0000 (0:00:01.221) 0:01:04.305 ***** 2026-01-28 00:59:35.451997 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.452005 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.452013 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.452020 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.452028 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.452036 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.452043 | orchestrator | 2026-01-28 00:59:35.452051 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-28 00:59:35.452058 | orchestrator | Wednesday 28 January 2026 00:49:22 +0000 (0:00:01.609) 0:01:05.915 ***** 2026-01-28 00:59:35.452066 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.452074 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.452082 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.452089 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.452097 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.452105 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.452112 | orchestrator | 2026-01-28 00:59:35.452120 | orchestrator | TASK [ceph-handler : Check for a mgr container] 
******************************** 2026-01-28 00:59:35.452128 | orchestrator | Wednesday 28 January 2026 00:49:23 +0000 (0:00:01.058) 0:01:06.974 ***** 2026-01-28 00:59:35.452136 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.452144 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.452151 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.452159 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.452166 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.452180 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.452188 | orchestrator | 2026-01-28 00:59:35.452195 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-28 00:59:35.452203 | orchestrator | Wednesday 28 January 2026 00:49:25 +0000 (0:00:01.687) 0:01:08.661 ***** 2026-01-28 00:59:35.452211 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.452219 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.452226 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.452234 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.452242 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.452249 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.452257 | orchestrator | 2026-01-28 00:59:35.452264 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-28 00:59:35.452272 | orchestrator | Wednesday 28 January 2026 00:49:26 +0000 (0:00:00.567) 0:01:09.228 ***** 2026-01-28 00:59:35.452280 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.452287 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.452295 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.452303 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.452310 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.452318 | orchestrator | skipping: [testbed-node-2] 2026-01-28 
00:59:35.452326 | orchestrator | 2026-01-28 00:59:35.452333 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-28 00:59:35.452341 | orchestrator | Wednesday 28 January 2026 00:49:26 +0000 (0:00:00.787) 0:01:10.016 ***** 2026-01-28 00:59:35.452349 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.452357 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.452364 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.452372 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.452379 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.452387 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.452402 | orchestrator | 2026-01-28 00:59:35.452410 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-28 00:59:35.452418 | orchestrator | Wednesday 28 January 2026 00:49:28 +0000 (0:00:01.078) 0:01:11.094 ***** 2026-01-28 00:59:35.452426 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.452434 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.452441 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.452449 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.452461 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.452474 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.452488 | orchestrator | 2026-01-28 00:59:35.452501 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-28 00:59:35.452515 | orchestrator | Wednesday 28 January 2026 00:49:30 +0000 (0:00:02.153) 0:01:13.248 ***** 2026-01-28 00:59:35.452528 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.452542 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.452555 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.452569 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.452583 | orchestrator | skipping: [testbed-node-1] 2026-01-28 
00:59:35.452597 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.452611 | orchestrator | 2026-01-28 00:59:35.452625 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-28 00:59:35.452639 | orchestrator | Wednesday 28 January 2026 00:49:31 +0000 (0:00:01.347) 0:01:14.595 ***** 2026-01-28 00:59:35.452652 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.452667 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.452681 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.452695 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.452710 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.452724 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.452738 | orchestrator | 2026-01-28 00:59:35.452752 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-28 00:59:35.452765 | orchestrator | Wednesday 28 January 2026 00:49:32 +0000 (0:00:01.233) 0:01:15.829 ***** 2026-01-28 00:59:35.452780 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.452800 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.452815 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.452829 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.452843 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.452857 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.452889 | orchestrator | 2026-01-28 00:59:35.452902 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-28 00:59:35.452916 | orchestrator | Wednesday 28 January 2026 00:49:33 +0000 (0:00:00.878) 0:01:16.707 ***** 2026-01-28 00:59:35.452930 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.452943 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.452956 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.452970 | orchestrator | skipping: [testbed-node-0] 
2026-01-28 00:59:35.452983 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.452998 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.453011 | orchestrator | 2026-01-28 00:59:35.453024 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-28 00:59:35.453037 | orchestrator | Wednesday 28 January 2026 00:49:35 +0000 (0:00:01.723) 0:01:18.430 ***** 2026-01-28 00:59:35.453051 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.453064 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.453077 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.453090 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.453104 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.453117 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.453130 | orchestrator | 2026-01-28 00:59:35.453143 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-28 00:59:35.453156 | orchestrator | Wednesday 28 January 2026 00:49:36 +0000 (0:00:01.295) 0:01:19.725 ***** 2026-01-28 00:59:35.453169 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.453191 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.453204 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.453217 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.453231 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.453244 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.453257 | orchestrator | 2026-01-28 00:59:35.453270 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-28 00:59:35.453283 | orchestrator | Wednesday 28 January 2026 00:49:38 +0000 (0:00:01.526) 0:01:21.251 ***** 2026-01-28 00:59:35.453296 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.453309 | orchestrator | skipping: [testbed-node-4] 2026-01-28 
00:59:35.453323 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.453336 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.453356 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.453369 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.453382 | orchestrator | 2026-01-28 00:59:35.453395 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-28 00:59:35.453409 | orchestrator | Wednesday 28 January 2026 00:49:38 +0000 (0:00:00.579) 0:01:21.831 ***** 2026-01-28 00:59:35.453422 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.453435 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.453448 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.453460 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.453473 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.453486 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.453500 | orchestrator | 2026-01-28 00:59:35.453513 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-28 00:59:35.453527 | orchestrator | Wednesday 28 January 2026 00:49:39 +0000 (0:00:00.717) 0:01:22.549 ***** 2026-01-28 00:59:35.453540 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.453552 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.453565 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.453578 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.453592 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.453605 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.453618 | orchestrator | 2026-01-28 00:59:35.453631 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-28 00:59:35.453644 | orchestrator | Wednesday 28 January 2026 00:49:40 +0000 (0:00:00.590) 0:01:23.139 ***** 2026-01-28 00:59:35.453657 | orchestrator | ok: [testbed-node-3] 
2026-01-28 00:59:35.453670 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.453683 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.453696 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.453709 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.453722 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.453735 | orchestrator | 2026-01-28 00:59:35.453748 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-01-28 00:59:35.453761 | orchestrator | Wednesday 28 January 2026 00:49:41 +0000 (0:00:01.360) 0:01:24.500 ***** 2026-01-28 00:59:35.453775 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.453789 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.453801 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.453815 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.453828 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.453841 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.453854 | orchestrator | 2026-01-28 00:59:35.453973 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-01-28 00:59:35.453990 | orchestrator | Wednesday 28 January 2026 00:49:43 +0000 (0:00:01.690) 0:01:26.191 ***** 2026-01-28 00:59:35.454004 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.454061 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.454077 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.454090 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.454103 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.454130 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.454144 | orchestrator | 2026-01-28 00:59:35.454157 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-01-28 00:59:35.454170 | orchestrator | Wednesday 28 January 2026 00:49:47 +0000 
(0:00:04.140) 0:01:30.331 ***** 2026-01-28 00:59:35.454184 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.454197 | orchestrator | 2026-01-28 00:59:35.454210 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-01-28 00:59:35.454223 | orchestrator | Wednesday 28 January 2026 00:49:48 +0000 (0:00:01.642) 0:01:31.973 ***** 2026-01-28 00:59:35.454234 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.454251 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.454262 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.454273 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.454284 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.454295 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.454306 | orchestrator | 2026-01-28 00:59:35.454317 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-01-28 00:59:35.454328 | orchestrator | Wednesday 28 January 2026 00:49:49 +0000 (0:00:00.814) 0:01:32.788 ***** 2026-01-28 00:59:35.454340 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.454351 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.454362 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.454373 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.454384 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.454395 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.454406 | orchestrator | 2026-01-28 00:59:35.454417 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-01-28 00:59:35.454429 | orchestrator | Wednesday 28 January 2026 00:49:51 +0000 (0:00:01.696) 0:01:34.484 ***** 2026-01-28 00:59:35.454440 | orchestrator | 
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Wednesday 28 January 2026 00:49:52 +0000 (0:00:01.543) 0:01:36.027 *****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-container-common : Restore certificates selinux context] ************
Wednesday 28 January 2026 00:49:54 +0000 (0:00:01.701) 0:01:37.729 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Wednesday 28 January 2026 00:49:55 +0000 (0:00:00.748) 0:01:38.478 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include registry.yml] ****************************
Wednesday 28 January 2026 00:49:56 +0000 (0:00:01.171) 0:01:39.649 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Wednesday 28 January 2026 00:49:57 +0000 (0:00:00.586) 0:01:40.236 *****
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Pulling Ceph container image] ********************
Wednesday 28 January 2026 00:49:58 +0000 (0:00:01.249) 0:01:41.485 *****
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Wednesday 28 January 2026 00:50:43 +0000 (0:00:45.223) 0:02:26.709 *****
skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Wednesday 28 January 2026 00:50:44 +0000 (0:00:00.770) 0:02:27.480 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Export local ceph dev image] *********************
Wednesday 28 January 2026 00:50:45 +0000 (0:00:00.840) 0:02:28.321 *****
skipping: [testbed-node-3]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Wednesday 28 January 2026 00:50:45 +0000 (0:00:00.155) 0:02:28.476 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Load ceph dev image] *****************************
Wednesday 28 January 2026 00:50:46 +0000 (0:00:00.709) 0:02:29.185 *****
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-3]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Wednesday 28 January 2026 00:50:47 +0000 (0:00:00.970) 0:02:30.156 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Get ceph version] ********************************
Wednesday 28 January 2026 00:50:47 +0000 (0:00:00.779) 0:02:30.936 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Wednesday 28 January 2026 00:50:50 +0000 (0:00:02.408) 0:02:33.344 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-container-common : Include release.yml] *****************************
Wednesday 28 January 2026 00:50:51 +0000 (0:00:00.724) 0:02:34.068 *****
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Wednesday 28 January 2026 00:50:52 +0000 (0:00:01.103) 0:02:35.171 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Wednesday 28 January 2026 00:50:53 +0000 (0:00:01.011) 0:02:36.183 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Wednesday 28 January 2026 00:50:53 +0000 (0:00:00.801) 0:02:36.985 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Wednesday 28 January 2026 00:50:54 +0000 (0:00:00.658) 0:02:37.644 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Wednesday 28 January 2026 00:50:55 +0000 (0:00:00.735) 0:02:38.380 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Wednesday 28 January 2026 00:50:56 +0000 (0:00:00.719) 0:02:39.099 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Wednesday 28 January 2026 00:50:56 +0000 (0:00:00.695) 0:02:39.795 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Wednesday 28 January 2026 00:50:57 +0000 (0:00:00.704) 0:02:40.499 *****
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-4]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Wednesday 28 January 2026 00:50:58 +0000 (0:00:00.670) 0:02:41.170 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Wednesday 28 January 2026 00:50:59 +0000 (0:00:01.411) 0:02:42.581 *****
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-config : Create ceph initial directories] ***************************
Wednesday 28 January 2026 00:51:01 +0000 (0:00:01.567) 0:02:44.149 *****
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-4] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/run/ceph)
changed: [testbed-node-3] => (item=/var/log/ceph)
changed: [testbed-node-0] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/run/ceph)
changed: [testbed-node-5] => (item=/var/log/ceph)
changed: [testbed-node-0] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Wednesday 28 January 2026 00:51:08 +0000 (0:00:07.644) 0:02:51.794 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create rados gateway instance directories] *****************
Wednesday 28 January 2026 00:51:09 +0000 (0:00:01.167) 0:02:52.961 *****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Wednesday 28 January 2026 00:51:10 +0000 (0:00:00.984) 0:02:53.946 *****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Reset num_osds] ********************************************
Wednesday 28 January 2026 00:51:12 +0000 (0:00:01.198) 0:02:55.144 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Wednesday 28 January 2026 00:51:13 +0000 (0:00:00.926) 0:02:56.070 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
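As context for the two tasks above: the num_osds accounting on the storage nodes amounts to resetting a counter and then adding one OSD per declared LVM volume. A minimal Python sketch of that bookkeeping, where the `lvm_volumes` entries are illustrative placeholders and not taken from this job's inventory:

```python
# Illustrative lvm_volumes list in the ceph-ansible style; the real values
# come from the host's inventory variables, not from this log.
lvm_volumes = [
    {"data": "osd-0", "data_vg": "ceph-0"},
    {"data": "osd-1", "data_vg": "ceph-1"},
]

num_osds = 0                  # "Reset num_osds"
num_osds += len(lvm_volumes)  # "Count number of osds for lvm scenario"
print(num_osds)
```

The per-host skips above match this picture: the counter is only maintained on the OSD hosts (testbed-node-3/4/5), while the control-plane nodes skip both tasks.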
TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Wednesday 28 January 2026 00:51:14 +0000 (0:00:01.177) 0:02:57.247 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Wednesday 28 January 2026 00:51:14 +0000 (0:00:00.615) 0:02:57.863 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _devices] *****************************************
Wednesday 28 January 2026 00:51:15 +0000 (0:00:00.729) 0:02:58.592 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Wednesday 28 January 2026 00:51:16 +0000 (0:00:00.782) 0:02:59.375 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Wednesday 28 January 2026 00:51:17 +0000 (0:00:01.165) 0:03:00.541 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Wednesday 28 January 2026 00:51:18 +0000 (0:00:00.573) 0:03:01.114 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Wednesday 28 January 2026 00:51:18 +0000 (0:00:00.869) 0:03:01.984 *****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Wednesday 28 January 2026 00:51:22 +0000 (0:00:03.226) 0:03:05.211 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Wednesday 28 January 2026 00:51:22 +0000 (0:00:00.798) 0:03:06.010 *****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Wednesday 28 January 2026 00:51:23 +0000 (0:00:00.868) 0:03:06.878 *****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Render rgw configs] ****************************************
Wednesday 28 January 2026 00:51:24 +0000 (0:00:00.898) 0:03:07.776 *****
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-0]
00:59:35.457933 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.457940 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.457946 | orchestrator | 2026-01-28 00:59:35.457952 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-01-28 00:59:35.457958 | orchestrator | Wednesday 28 January 2026 00:51:25 +0000 (0:00:00.594) 0:03:08.371 ***** 2026-01-28 00:59:35.457966 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-01-28 00:59:35.457974 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-01-28 00:59:35.457982 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.457988 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-01-28 00:59:35.457994 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-01-28 00:59:35.458001 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 
'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-01-28 00:59:35.458007 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-01-28 00:59:35.458013 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.458057 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.458064 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.458070 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.458076 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.458082 | orchestrator | 2026-01-28 00:59:35.458088 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-01-28 00:59:35.458098 | orchestrator | Wednesday 28 January 2026 00:51:26 +0000 (0:00:00.867) 0:03:09.238 ***** 2026-01-28 00:59:35.458104 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.458110 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.458116 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.458122 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.458128 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.458134 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.458140 | orchestrator | 2026-01-28 00:59:35.458146 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-01-28 00:59:35.458153 | orchestrator | Wednesday 28 January 2026 00:51:26 +0000 (0:00:00.586) 0:03:09.825 ***** 2026-01-28 00:59:35.458159 | 
orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.458165 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.458171 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.458183 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.458189 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.458195 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.458201 | orchestrator | 2026-01-28 00:59:35.458208 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-01-28 00:59:35.458214 | orchestrator | Wednesday 28 January 2026 00:51:27 +0000 (0:00:01.111) 0:03:10.936 ***** 2026-01-28 00:59:35.458220 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.458226 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.458232 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.458238 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.458244 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.458250 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.458256 | orchestrator | 2026-01-28 00:59:35.458262 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-01-28 00:59:35.458268 | orchestrator | Wednesday 28 January 2026 00:51:28 +0000 (0:00:00.869) 0:03:11.806 ***** 2026-01-28 00:59:35.458274 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.458280 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.458286 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.458292 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.458298 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.458304 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.458310 | orchestrator | 2026-01-28 00:59:35.458316 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv6] **** 2026-01-28 00:59:35.458334 | orchestrator | Wednesday 28 January 2026 00:51:30 +0000 (0:00:01.427) 0:03:13.234 ***** 2026-01-28 00:59:35.458341 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.458347 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.458353 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.458359 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.458365 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.458371 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.458377 | orchestrator | 2026-01-28 00:59:35.458383 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-01-28 00:59:35.458394 | orchestrator | Wednesday 28 January 2026 00:51:31 +0000 (0:00:01.022) 0:03:14.256 ***** 2026-01-28 00:59:35.458404 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.458413 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.458422 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.458432 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.458442 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.458452 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.458476 | orchestrator | 2026-01-28 00:59:35.458487 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-01-28 00:59:35.458498 | orchestrator | Wednesday 28 January 2026 00:51:32 +0000 (0:00:00.974) 0:03:15.231 ***** 2026-01-28 00:59:35.458505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:59:35.458511 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-28 00:59:35.458517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:59:35.458523 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.458529 | orchestrator | 2026-01-28 
00:59:35.458536 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-01-28 00:59:35.458542 | orchestrator | Wednesday 28 January 2026 00:51:32 +0000 (0:00:00.327) 0:03:15.558 ***** 2026-01-28 00:59:35.458548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:59:35.458554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-28 00:59:35.458560 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:59:35.458566 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.458572 | orchestrator | 2026-01-28 00:59:35.458584 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-01-28 00:59:35.458590 | orchestrator | Wednesday 28 January 2026 00:51:32 +0000 (0:00:00.400) 0:03:15.959 ***** 2026-01-28 00:59:35.458596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:59:35.458602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-28 00:59:35.458608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:59:35.458614 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.458620 | orchestrator | 2026-01-28 00:59:35.458626 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-01-28 00:59:35.458632 | orchestrator | Wednesday 28 January 2026 00:51:33 +0000 (0:00:00.349) 0:03:16.309 ***** 2026-01-28 00:59:35.458638 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.458644 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.458650 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.458656 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.458662 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.458668 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.458674 | orchestrator | 2026-01-28 
00:59:35.458680 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-01-28 00:59:35.458686 | orchestrator | Wednesday 28 January 2026 00:51:33 +0000 (0:00:00.523) 0:03:16.832 ***** 2026-01-28 00:59:35.458692 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-01-28 00:59:35.458698 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-28 00:59:35.458708 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-28 00:59:35.458714 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-28 00:59:35.458721 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.458726 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-01-28 00:59:35.458733 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.458739 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-01-28 00:59:35.458745 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.458751 | orchestrator | 2026-01-28 00:59:35.458757 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-01-28 00:59:35.458763 | orchestrator | Wednesday 28 January 2026 00:51:35 +0000 (0:00:01.900) 0:03:18.733 ***** 2026-01-28 00:59:35.458769 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.458775 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.458781 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.458787 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.458793 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.458798 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.458804 | orchestrator | 2026-01-28 00:59:35.458810 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-28 00:59:35.458816 | orchestrator | Wednesday 28 January 2026 00:51:38 +0000 (0:00:02.538) 0:03:21.271 ***** 2026-01-28 00:59:35.458822 | orchestrator | changed: [testbed-node-3] 2026-01-28 
00:59:35.458828 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.458834 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.458840 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.458846 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.458852 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.458873 | orchestrator | 2026-01-28 00:59:35.458881 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-01-28 00:59:35.458887 | orchestrator | Wednesday 28 January 2026 00:51:39 +0000 (0:00:01.036) 0:03:22.307 ***** 2026-01-28 00:59:35.458893 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.458899 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.458905 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.458911 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.458917 | orchestrator | 2026-01-28 00:59:35.458923 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-01-28 00:59:35.458947 | orchestrator | Wednesday 28 January 2026 00:51:40 +0000 (0:00:00.858) 0:03:23.166 ***** 2026-01-28 00:59:35.458954 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.458960 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.458966 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.458972 | orchestrator | 2026-01-28 00:59:35.458978 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-01-28 00:59:35.458984 | orchestrator | Wednesday 28 January 2026 00:51:40 +0000 (0:00:00.284) 0:03:23.451 ***** 2026-01-28 00:59:35.458990 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.458996 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.459002 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.459009 | 
orchestrator | 2026-01-28 00:59:35.459015 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-01-28 00:59:35.459021 | orchestrator | Wednesday 28 January 2026 00:51:41 +0000 (0:00:01.219) 0:03:24.671 ***** 2026-01-28 00:59:35.459027 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-28 00:59:35.459033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-28 00:59:35.459039 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-28 00:59:35.459045 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.459051 | orchestrator | 2026-01-28 00:59:35.459057 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-01-28 00:59:35.459063 | orchestrator | Wednesday 28 January 2026 00:51:42 +0000 (0:00:00.559) 0:03:25.230 ***** 2026-01-28 00:59:35.459070 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.459076 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.459082 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.459088 | orchestrator | 2026-01-28 00:59:35.459094 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-01-28 00:59:35.459101 | orchestrator | Wednesday 28 January 2026 00:51:42 +0000 (0:00:00.344) 0:03:25.574 ***** 2026-01-28 00:59:35.459107 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.459113 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.459119 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.459125 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.459131 | orchestrator | 2026-01-28 00:59:35.459137 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-01-28 00:59:35.459143 | orchestrator | Wednesday 28 January 2026 
00:51:43 +0000 (0:00:01.076) 0:03:26.650 ***** 2026-01-28 00:59:35.459149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:59:35.459155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-28 00:59:35.459161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:59:35.459168 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459174 | orchestrator | 2026-01-28 00:59:35.459180 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-01-28 00:59:35.459186 | orchestrator | Wednesday 28 January 2026 00:51:44 +0000 (0:00:00.473) 0:03:27.124 ***** 2026-01-28 00:59:35.459192 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459198 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.459204 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.459210 | orchestrator | 2026-01-28 00:59:35.459216 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-01-28 00:59:35.459222 | orchestrator | Wednesday 28 January 2026 00:51:44 +0000 (0:00:00.361) 0:03:27.486 ***** 2026-01-28 00:59:35.459228 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459235 | orchestrator | 2026-01-28 00:59:35.459241 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-01-28 00:59:35.459247 | orchestrator | Wednesday 28 January 2026 00:51:44 +0000 (0:00:00.221) 0:03:27.707 ***** 2026-01-28 00:59:35.459256 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459267 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.459274 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.459280 | orchestrator | 2026-01-28 00:59:35.459286 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-01-28 00:59:35.459292 | orchestrator | Wednesday 28 January 
2026 00:51:44 +0000 (0:00:00.336) 0:03:28.044 ***** 2026-01-28 00:59:35.459298 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459304 | orchestrator | 2026-01-28 00:59:35.459310 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-01-28 00:59:35.459316 | orchestrator | Wednesday 28 January 2026 00:51:45 +0000 (0:00:00.225) 0:03:28.269 ***** 2026-01-28 00:59:35.459322 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459328 | orchestrator | 2026-01-28 00:59:35.459335 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-01-28 00:59:35.459341 | orchestrator | Wednesday 28 January 2026 00:51:45 +0000 (0:00:00.238) 0:03:28.508 ***** 2026-01-28 00:59:35.459347 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459353 | orchestrator | 2026-01-28 00:59:35.459359 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-01-28 00:59:35.459365 | orchestrator | Wednesday 28 January 2026 00:51:45 +0000 (0:00:00.149) 0:03:28.657 ***** 2026-01-28 00:59:35.459371 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459377 | orchestrator | 2026-01-28 00:59:35.459383 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-01-28 00:59:35.459389 | orchestrator | Wednesday 28 January 2026 00:51:46 +0000 (0:00:00.833) 0:03:29.491 ***** 2026-01-28 00:59:35.459395 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459401 | orchestrator | 2026-01-28 00:59:35.459408 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-01-28 00:59:35.459414 | orchestrator | Wednesday 28 January 2026 00:51:46 +0000 (0:00:00.218) 0:03:29.709 ***** 2026-01-28 00:59:35.459420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:59:35.459426 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-4)  2026-01-28 00:59:35.459432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:59:35.459438 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459444 | orchestrator | 2026-01-28 00:59:35.459450 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-01-28 00:59:35.459468 | orchestrator | Wednesday 28 January 2026 00:51:47 +0000 (0:00:00.420) 0:03:30.130 ***** 2026-01-28 00:59:35.459474 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459480 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.459486 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.459492 | orchestrator | 2026-01-28 00:59:35.459538 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-01-28 00:59:35.459545 | orchestrator | Wednesday 28 January 2026 00:51:47 +0000 (0:00:00.342) 0:03:30.472 ***** 2026-01-28 00:59:35.459551 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459557 | orchestrator | 2026-01-28 00:59:35.459563 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-01-28 00:59:35.459569 | orchestrator | Wednesday 28 January 2026 00:51:47 +0000 (0:00:00.242) 0:03:30.714 ***** 2026-01-28 00:59:35.459575 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459581 | orchestrator | 2026-01-28 00:59:35.459588 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-28 00:59:35.459594 | orchestrator | Wednesday 28 January 2026 00:51:47 +0000 (0:00:00.224) 0:03:30.939 ***** 2026-01-28 00:59:35.459600 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.459606 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.459612 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.459618 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.459624 | orchestrator | 2026-01-28 00:59:35.459630 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-01-28 00:59:35.459641 | orchestrator | Wednesday 28 January 2026 00:51:49 +0000 (0:00:01.114) 0:03:32.053 ***** 2026-01-28 00:59:35.459647 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.459653 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.459659 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.459665 | orchestrator | 2026-01-28 00:59:35.459672 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-28 00:59:35.459678 | orchestrator | Wednesday 28 January 2026 00:51:49 +0000 (0:00:00.350) 0:03:32.403 ***** 2026-01-28 00:59:35.459684 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.459690 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.459696 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.459702 | orchestrator | 2026-01-28 00:59:35.459708 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-28 00:59:35.459714 | orchestrator | Wednesday 28 January 2026 00:51:50 +0000 (0:00:01.320) 0:03:33.724 ***** 2026-01-28 00:59:35.459720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:59:35.459726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-28 00:59:35.459732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:59:35.459738 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.459745 | orchestrator | 2026-01-28 00:59:35.459751 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-28 00:59:35.459757 | orchestrator | Wednesday 28 January 2026 00:51:51 +0000 (0:00:00.737) 
0:03:34.462 ***** 2026-01-28 00:59:35.459763 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.459769 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.459775 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.459781 | orchestrator | 2026-01-28 00:59:35.459787 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-01-28 00:59:35.459793 | orchestrator | Wednesday 28 January 2026 00:51:51 +0000 (0:00:00.530) 0:03:34.992 ***** 2026-01-28 00:59:35.459799 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.459805 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.459811 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.459821 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.459827 | orchestrator | 2026-01-28 00:59:35.459833 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-01-28 00:59:35.459839 | orchestrator | Wednesday 28 January 2026 00:51:52 +0000 (0:00:00.777) 0:03:35.769 ***** 2026-01-28 00:59:35.459846 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.459852 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.459939 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.459973 | orchestrator | 2026-01-28 00:59:35.459980 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-01-28 00:59:35.459986 | orchestrator | Wednesday 28 January 2026 00:51:53 +0000 (0:00:00.431) 0:03:36.200 ***** 2026-01-28 00:59:35.459992 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.459999 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.460005 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.460011 | orchestrator | 2026-01-28 00:59:35.460017 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
********************
2026-01-28 00:59:35.460023 | orchestrator | Wednesday 28 January 2026 00:51:54 +0000 (0:00:01.055) 0:03:37.256 *****
2026-01-28 00:59:35.460029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-28 00:59:35.460035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-28 00:59:35.460041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-28 00:59:35.460047 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.460053 | orchestrator |
2026-01-28 00:59:35.460059 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-28 00:59:35.460065 | orchestrator | Wednesday 28 January 2026 00:51:54 +0000 (0:00:00.596) 0:03:37.852 *****
2026-01-28 00:59:35.460079 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.460085 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.460091 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.460097 | orchestrator |
2026-01-28 00:59:35.460103 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-01-28 00:59:35.460109 | orchestrator | Wednesday 28 January 2026 00:51:55 +0000 (0:00:00.341) 0:03:38.194 *****
2026-01-28 00:59:35.460115 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.460121 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.460127 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.460133 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460139 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460162 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460168 | orchestrator |
2026-01-28 00:59:35.460173 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-01-28 00:59:35.460179 | orchestrator | Wednesday 28 January 2026 00:51:55 +0000 (0:00:00.850) 0:03:39.044 *****
2026-01-28 00:59:35.460184 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.460189 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.460195 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.460200 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.460206 | orchestrator |
2026-01-28 00:59:35.460211 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-01-28 00:59:35.460216 | orchestrator | Wednesday 28 January 2026 00:51:56 +0000 (0:00:00.972) 0:03:40.017 *****
2026-01-28 00:59:35.460222 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.460227 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.460232 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.460238 | orchestrator |
2026-01-28 00:59:35.460243 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-01-28 00:59:35.460248 | orchestrator | Wednesday 28 January 2026 00:51:57 +0000 (0:00:00.496) 0:03:40.513 *****
2026-01-28 00:59:35.460253 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.460259 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.460264 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.460269 | orchestrator |
2026-01-28 00:59:35.460275 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-01-28 00:59:35.460280 | orchestrator | Wednesday 28 January 2026 00:51:58 +0000 (0:00:01.335) 0:03:41.849 *****
2026-01-28 00:59:35.460286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-28 00:59:35.460291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-28 00:59:35.460296 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-28 00:59:35.460302 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460307 | orchestrator |
2026-01-28 00:59:35.460312 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-01-28 00:59:35.460317 | orchestrator | Wednesday 28 January 2026 00:51:59 +0000 (0:00:00.547) 0:03:42.397 *****
2026-01-28 00:59:35.460323 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.460328 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.460333 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.460339 | orchestrator |
2026-01-28 00:59:35.460344 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-01-28 00:59:35.460349 | orchestrator |
2026-01-28 00:59:35.460354 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-28 00:59:35.460360 | orchestrator | Wednesday 28 January 2026 00:51:59 +0000 (0:00:00.563) 0:03:42.960 *****
2026-01-28 00:59:35.460365 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.460371 | orchestrator |
2026-01-28 00:59:35.460376 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-28 00:59:35.460386 | orchestrator | Wednesday 28 January 2026 00:52:00 +0000 (0:00:00.659) 0:03:43.620 *****
2026-01-28 00:59:35.460391 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.460397 | orchestrator |
2026-01-28 00:59:35.460402 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-28 00:59:35.460407 | orchestrator | Wednesday 28 January 2026 00:52:01 +0000 (0:00:00.496) 0:03:44.116 *****
2026-01-28 00:59:35.460412 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.460422 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.460427 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.460433 | orchestrator |
2026-01-28 00:59:35.460438 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-28 00:59:35.460443 | orchestrator | Wednesday 28 January 2026 00:52:01 +0000 (0:00:00.826) 0:03:44.943 *****
2026-01-28 00:59:35.460449 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460454 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460459 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460465 | orchestrator |
2026-01-28 00:59:35.460470 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-28 00:59:35.460476 | orchestrator | Wednesday 28 January 2026 00:52:02 +0000 (0:00:00.282) 0:03:45.226 *****
2026-01-28 00:59:35.460481 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460486 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460492 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460497 | orchestrator |
2026-01-28 00:59:35.460502 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-28 00:59:35.460508 | orchestrator | Wednesday 28 January 2026 00:52:02 +0000 (0:00:00.295) 0:03:45.522 *****
2026-01-28 00:59:35.460513 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460518 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460524 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460529 | orchestrator |
2026-01-28 00:59:35.460534 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-28 00:59:35.460539 | orchestrator | Wednesday 28 January 2026 00:52:02 +0000 (0:00:00.280) 0:03:45.802 *****
2026-01-28 00:59:35.460545 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.460550 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.460556 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.460561 | orchestrator |
2026-01-28 00:59:35.460566 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-28 00:59:35.460572 | orchestrator | Wednesday 28 January 2026 00:52:03 +0000 (0:00:00.871) 0:03:46.674 *****
2026-01-28 00:59:35.460577 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460582 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460587 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460593 | orchestrator |
2026-01-28 00:59:35.460598 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-28 00:59:35.460604 | orchestrator | Wednesday 28 January 2026 00:52:03 +0000 (0:00:00.292) 0:03:46.966 *****
2026-01-28 00:59:35.460619 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460625 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460630 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460636 | orchestrator |
2026-01-28 00:59:35.460641 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-28 00:59:35.460646 | orchestrator | Wednesday 28 January 2026 00:52:04 +0000 (0:00:00.295) 0:03:47.262 *****
2026-01-28 00:59:35.460652 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.460657 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.460662 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.460668 | orchestrator |
2026-01-28 00:59:35.460673 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-28 00:59:35.460678 | orchestrator | Wednesday 28 January 2026 00:52:04 +0000 (0:00:00.729) 0:03:47.991 *****
2026-01-28 00:59:35.460683 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.460695 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.460700 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.460706 | orchestrator |
2026-01-28 00:59:35.460711 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-28 00:59:35.460716 | orchestrator | Wednesday 28 January 2026 00:52:05 +0000 (0:00:00.827) 0:03:48.819 *****
2026-01-28 00:59:35.460721 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460727 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460732 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460737 | orchestrator |
2026-01-28 00:59:35.460742 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-28 00:59:35.460748 | orchestrator | Wednesday 28 January 2026 00:52:06 +0000 (0:00:00.437) 0:03:49.256 *****
2026-01-28 00:59:35.460753 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.460758 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.460764 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.460769 | orchestrator |
2026-01-28 00:59:35.460774 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-28 00:59:35.460779 | orchestrator | Wednesday 28 January 2026 00:52:06 +0000 (0:00:00.381) 0:03:49.638 *****
2026-01-28 00:59:35.460785 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460790 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460795 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460801 | orchestrator |
2026-01-28 00:59:35.460806 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-28 00:59:35.460811 | orchestrator | Wednesday 28 January 2026 00:52:06 +0000 (0:00:00.362) 0:03:50.000 *****
2026-01-28 00:59:35.460816 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460822 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460827 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460832 | orchestrator |
2026-01-28 00:59:35.460837 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-28 00:59:35.460843 | orchestrator | Wednesday 28 January 2026 00:52:07 +0000 (0:00:00.355) 0:03:50.356 *****
2026-01-28 00:59:35.460848 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460853 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460872 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460882 | orchestrator |
2026-01-28 00:59:35.460892 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-28 00:59:35.460898 | orchestrator | Wednesday 28 January 2026 00:52:07 +0000 (0:00:00.566) 0:03:50.923 *****
2026-01-28 00:59:35.460903 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460908 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460914 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460919 | orchestrator |
2026-01-28 00:59:35.460924 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-28 00:59:35.460929 | orchestrator | Wednesday 28 January 2026 00:52:08 +0000 (0:00:00.337) 0:03:51.260 *****
2026-01-28 00:59:35.460935 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.460940 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.460948 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.460954 | orchestrator |
2026-01-28 00:59:35.460959 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-28 00:59:35.460964 | orchestrator | Wednesday 28 January 2026 00:52:08 +0000 (0:00:00.347) 0:03:51.608 *****
2026-01-28 00:59:35.460970 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.460975 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.460980 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.460986 | orchestrator |
2026-01-28 00:59:35.460991 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-28 00:59:35.460996 | orchestrator | Wednesday 28 January 2026 00:52:08 +0000 (0:00:00.354) 0:03:51.963 *****
2026-01-28 00:59:35.461001 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.461007 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.461017 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.461022 | orchestrator |
2026-01-28 00:59:35.461027 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-28 00:59:35.461033 | orchestrator | Wednesday 28 January 2026 00:52:09 +0000 (0:00:00.729) 0:03:52.692 *****
2026-01-28 00:59:35.461038 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.461043 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.461048 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.461054 | orchestrator |
2026-01-28 00:59:35.461059 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-01-28 00:59:35.461064 | orchestrator | Wednesday 28 January 2026 00:52:10 +0000 (0:00:00.550) 0:03:53.243 *****
2026-01-28 00:59:35.461069 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.461075 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.461080 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.461085 | orchestrator |
2026-01-28 00:59:35.461091 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-01-28 00:59:35.461096 | orchestrator | Wednesday 28 January 2026 00:52:10 +0000 (0:00:00.475) 0:03:53.718 *****
2026-01-28 00:59:35.461101 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.461107 | orchestrator |
2026-01-28 00:59:35.461112 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-01-28 00:59:35.461117 | orchestrator | Wednesday 28 January 2026 00:52:11 +0000 (0:00:01.057) 0:03:54.775 *****
2026-01-28 00:59:35.461122 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.461128 | orchestrator |
2026-01-28 00:59:35.461144 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-01-28 00:59:35.461150 | orchestrator | Wednesday 28 January 2026 00:52:11 +0000 (0:00:00.141) 0:03:54.917 *****
2026-01-28 00:59:35.461155 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-01-28 00:59:35.461161 | orchestrator |
2026-01-28 00:59:35.461166 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-01-28 00:59:35.461171 | orchestrator | Wednesday 28 January 2026 00:52:12 +0000 (0:00:01.023) 0:03:55.940 *****
2026-01-28 00:59:35.461177 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.461182 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.461187 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.461193 | orchestrator |
2026-01-28 00:59:35.461198 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-01-28 00:59:35.461203 | orchestrator | Wednesday 28 January 2026 00:52:13 +0000 (0:00:00.414) 0:03:56.354 *****
2026-01-28 00:59:35.461209 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.461214 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.461219 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.461224 | orchestrator |
2026-01-28 00:59:35.461230 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-01-28 00:59:35.461235 | orchestrator | Wednesday 28 January 2026 00:52:13 +0000 (0:00:00.640) 0:03:56.995 *****
2026-01-28 00:59:35.461241 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461246 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.461251 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.461257 | orchestrator |
2026-01-28 00:59:35.461262 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-01-28 00:59:35.461267 | orchestrator | Wednesday 28 January 2026 00:52:15 +0000 (0:00:01.280) 0:03:58.275 *****
2026-01-28 00:59:35.461273 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461278 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.461283 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.461288 | orchestrator |
2026-01-28 00:59:35.461294 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-01-28 00:59:35.461299 | orchestrator | Wednesday 28 January 2026 00:52:16 +0000 (0:00:01.125) 0:03:59.401 *****
2026-01-28 00:59:35.461304 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.461310 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.461319 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461325 | orchestrator |
2026-01-28 00:59:35.461330 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-01-28 00:59:35.461335 | orchestrator | Wednesday 28 January 2026 00:52:17 +0000 (0:00:00.855) 0:04:00.257 *****
2026-01-28 00:59:35.461341 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.461346 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.461351 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.461356 | orchestrator |
2026-01-28 00:59:35.461362 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-01-28 00:59:35.461367 | orchestrator | Wednesday 28 January 2026 00:52:18 +0000 (0:00:00.847) 0:04:01.104 *****
2026-01-28 00:59:35.461372 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461378 | orchestrator |
2026-01-28 00:59:35.461383 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-01-28 00:59:35.461388 | orchestrator | Wednesday 28 January 2026 00:52:19 +0000 (0:00:01.643) 0:04:02.748 *****
2026-01-28 00:59:35.461393 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.461399 | orchestrator |
2026-01-28 00:59:35.461415 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-01-28 00:59:35.461421 | orchestrator | Wednesday 28 January 2026 00:52:20 +0000 (0:00:00.677) 0:04:03.425 *****
2026-01-28 00:59:35.461426 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-28 00:59:35.461432 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-28 00:59:35.461440 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-28 00:59:35.461445 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-28 00:59:35.461451 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-01-28 00:59:35.461456 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-01-28 00:59:35.461461 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-28 00:59:35.461467 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-01-28 00:59:35.461472 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-01-28 00:59:35.461477 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-01-28 00:59:35.461483 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-01-28 00:59:35.461488 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-01-28 00:59:35.461493 | orchestrator |
2026-01-28 00:59:35.461498 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-01-28 00:59:35.461504 | orchestrator | Wednesday 28 January 2026 00:52:24 +0000 (0:00:03.927) 0:04:07.353 *****
2026-01-28 00:59:35.461509 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461514 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.461519 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.461525 | orchestrator |
2026-01-28 00:59:35.461530 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-01-28 00:59:35.461535 | orchestrator | Wednesday 28 January 2026 00:52:25 +0000 (0:00:01.554) 0:04:08.908 *****
2026-01-28 00:59:35.461540 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.461546 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.461551 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.461556 | orchestrator |
2026-01-28 00:59:35.461561 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-01-28 00:59:35.461567 | orchestrator | Wednesday 28 January 2026 00:52:26 +0000 (0:00:00.362) 0:04:09.271 *****
2026-01-28 00:59:35.461572 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.461577 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.461582 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.461588 | orchestrator |
2026-01-28 00:59:35.461593 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-01-28 00:59:35.461598 | orchestrator | Wednesday 28 January 2026 00:52:26 +0000 (0:00:00.607) 0:04:09.878 *****
2026-01-28 00:59:35.461609 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461627 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.461633 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.461638 | orchestrator |
2026-01-28 00:59:35.461643 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-01-28 00:59:35.461649 | orchestrator | Wednesday 28 January 2026 00:52:28 +0000 (0:00:01.628) 0:04:11.507 *****
2026-01-28 00:59:35.461654 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461659 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.461664 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.461670 | orchestrator |
2026-01-28 00:59:35.461675 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-01-28 00:59:35.461681 | orchestrator | Wednesday 28 January 2026 00:52:29 +0000 (0:00:01.491) 0:04:12.998 *****
2026-01-28 00:59:35.461686 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.461691 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.461696 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.461702 | orchestrator |
2026-01-28 00:59:35.461707 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-01-28 00:59:35.461712 | orchestrator | Wednesday 28 January 2026 00:52:30 +0000 (0:00:00.394) 0:04:13.393 *****
2026-01-28 00:59:35.461718 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.461723 | orchestrator |
2026-01-28 00:59:35.461728 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-01-28 00:59:35.461734 | orchestrator | Wednesday 28 January 2026 00:52:31 +0000 (0:00:01.009) 0:04:14.402 *****
2026-01-28 00:59:35.461739 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.461744 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.461750 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.461755 | orchestrator |
2026-01-28 00:59:35.461760 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-01-28 00:59:35.461766 | orchestrator | Wednesday 28 January 2026 00:52:31 +0000 (0:00:00.422) 0:04:14.849 *****
2026-01-28 00:59:35.461771 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.461776 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.461781 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.461787 | orchestrator |
2026-01-28 00:59:35.461792 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-01-28 00:59:35.461797 | orchestrator | Wednesday 28 January 2026 00:52:32 +0000 (0:00:00.422) 0:04:15.272 *****
2026-01-28 00:59:35.461802 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.461808 | orchestrator |
2026-01-28 00:59:35.461813 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-01-28 00:59:35.461819 | orchestrator | Wednesday 28 January 2026 00:52:33 +0000 (0:00:00.996) 0:04:16.268 *****
2026-01-28 00:59:35.461824 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.461829 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.461834 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461840 | orchestrator |
2026-01-28 00:59:35.461845 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-01-28 00:59:35.461850 | orchestrator | Wednesday 28 January 2026 00:52:35 +0000 (0:00:02.285) 0:04:18.554 *****
2026-01-28 00:59:35.461856 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.461879 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.461885 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461890 | orchestrator |
2026-01-28 00:59:35.461896 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-01-28 00:59:35.461901 | orchestrator | Wednesday 28 January 2026 00:52:37 +0000 (0:00:01.644) 0:04:20.198 *****
2026-01-28 00:59:35.461907 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461915 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.461921 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.461930 | orchestrator |
2026-01-28 00:59:35.461936 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-01-28 00:59:35.461941 | orchestrator | Wednesday 28 January 2026 00:52:39 +0000 (0:00:02.138) 0:04:22.337 *****
2026-01-28 00:59:35.461947 | orchestrator | changed: [testbed-node-0]
2026-01-28 00:59:35.461952 | orchestrator | changed: [testbed-node-1]
2026-01-28 00:59:35.461957 | orchestrator | changed: [testbed-node-2]
2026-01-28 00:59:35.461962 | orchestrator |
2026-01-28 00:59:35.461968 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-01-28 00:59:35.461973 | orchestrator | Wednesday 28 January 2026 00:52:41 +0000 (0:00:02.216) 0:04:24.553 *****
2026-01-28 00:59:35.461978 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.461984 | orchestrator |
2026-01-28 00:59:35.461989 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-01-28 00:59:35.461994 | orchestrator | Wednesday 28 January 2026 00:52:42 +0000 (0:00:00.674) 0:04:25.228 *****
2026-01-28 00:59:35.461999 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-01-28 00:59:35.462005 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.462010 | orchestrator |
2026-01-28 00:59:35.462041 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-01-28 00:59:35.462048 | orchestrator | Wednesday 28 January 2026 00:53:04 +0000 (0:00:21.827) 0:04:47.055 *****
2026-01-28 00:59:35.462053 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.462058 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.462064 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.462069 | orchestrator |
2026-01-28 00:59:35.462074 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-01-28 00:59:35.462080 | orchestrator | Wednesday 28 January 2026 00:53:13 +0000 (0:00:08.997) 0:04:56.053 *****
2026-01-28 00:59:35.462085 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.462090 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.462096 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.462101 | orchestrator |
2026-01-28 00:59:35.462106 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-01-28 00:59:35.462122 | orchestrator | Wednesday 28 January 2026 00:53:13 +0000 (0:00:00.609) 0:04:56.662 *****
2026-01-28 00:59:35.462129 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46b05ba5269cb7d4b0a89e101839d865c3809187'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-01-28 00:59:35.462137 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46b05ba5269cb7d4b0a89e101839d865c3809187'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-01-28 00:59:35.462144 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46b05ba5269cb7d4b0a89e101839d865c3809187'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-01-28 00:59:35.462151 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46b05ba5269cb7d4b0a89e101839d865c3809187'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-01-28 00:59:35.462161 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46b05ba5269cb7d4b0a89e101839d865c3809187'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-01-28 00:59:35.462167 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46b05ba5269cb7d4b0a89e101839d865c3809187'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__46b05ba5269cb7d4b0a89e101839d865c3809187'}])
2026-01-28 00:59:35.462174 | orchestrator |
2026-01-28 00:59:35.462180 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-28 00:59:35.462188 | orchestrator | Wednesday 28 January 2026 00:53:27 +0000 (0:00:14.160) 0:05:10.823 *****
2026-01-28 00:59:35.462193 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.462199 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.462204 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.462209 | orchestrator |
2026-01-28 00:59:35.462215 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-01-28 00:59:35.462220 | orchestrator | Wednesday 28 January 2026 00:53:28 +0000 (0:00:00.337) 0:05:11.161 *****
2026-01-28 00:59:35.462225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.462231 | orchestrator |
2026-01-28 00:59:35.462236 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-01-28 00:59:35.462241 | orchestrator | Wednesday 28 January 2026 00:53:28 +0000 (0:00:00.757) 0:05:11.918 *****
2026-01-28 00:59:35.462247 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.462252 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.462258 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.462263 | orchestrator |
2026-01-28 00:59:35.462268 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-01-28 00:59:35.462274 | orchestrator | Wednesday 28 January 2026 00:53:29 +0000 (0:00:00.322) 0:05:12.241 *****
2026-01-28 00:59:35.462279 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.462284 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.462290 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.462295 | orchestrator |
2026-01-28 00:59:35.462300 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-01-28 00:59:35.462306 | orchestrator | Wednesday 28 January 2026 00:53:29 +0000 (0:00:00.334) 0:05:12.576 *****
2026-01-28 00:59:35.462311 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-01-28 00:59:35.462316 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-01-28 00:59:35.462321 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-01-28 00:59:35.462327 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.462332 | orchestrator |
2026-01-28 00:59:35.462337 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-01-28 00:59:35.462343 | orchestrator | Wednesday 28 January 2026 00:53:30 +0000 (0:00:00.888) 0:05:13.465 *****
2026-01-28 00:59:35.462348 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.462353 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.462369 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.462374 | orchestrator |
2026-01-28 00:59:35.462380 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-01-28 00:59:35.462385 | orchestrator |
2026-01-28 00:59:35.462390 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-28 00:59:35.462396 | orchestrator | Wednesday 28 January 2026 00:53:31 +0000 (0:00:00.853) 0:05:14.318 *****
2026-01-28 00:59:35.462401 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.462410 | orchestrator |
2026-01-28 00:59:35.462415 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-28 00:59:35.462421 | orchestrator | Wednesday 28 January 2026 00:53:31 +0000 (0:00:00.612) 0:05:14.931 *****
2026-01-28 00:59:35.462426 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.462431 | orchestrator |
2026-01-28 00:59:35.462437 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-28 00:59:35.462442 | orchestrator | Wednesday 28 January 2026 00:53:32 +0000 (0:00:00.935) 0:05:15.867 *****
2026-01-28 00:59:35.462447 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.462453 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.462458 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.462463 | orchestrator |
2026-01-28 00:59:35.462469 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-28 00:59:35.462474 | orchestrator | Wednesday 28 January 2026 00:53:33 +0000 (0:00:00.741) 0:05:16.608 *****
2026-01-28 00:59:35.462479 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.462485 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.462490 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.462495 | orchestrator |
2026-01-28 00:59:35.462500 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-28 00:59:35.462506 | orchestrator | Wednesday 28 January 2026 00:53:33 +0000 (0:00:00.313) 0:05:16.922 *****
2026-01-28 00:59:35.462511 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.462516 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.462522 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.462527 | orchestrator |
2026-01-28 00:59:35.462532 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-28 00:59:35.462537 | orchestrator | Wednesday 28 January 2026 00:53:34 +0000 (0:00:00.763) 0:05:17.685 *****
2026-01-28 00:59:35.462543 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.462548 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.462553 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.462559 | orchestrator |
2026-01-28 00:59:35.462564 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-28 00:59:35.462569 | orchestrator | Wednesday 28 January 2026 00:53:34 +0000 (0:00:00.319) 0:05:18.005 *****
2026-01-28 00:59:35.462575 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.462580 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.462585 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.462591 | orchestrator |
2026-01-28 00:59:35.462596 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-28 00:59:35.462601 | orchestrator | Wednesday 28 January 2026 00:53:35 +0000 (0:00:00.764) 0:05:18.769 *****
2026-01-28 00:59:35.462607 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.462612 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.462617 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.462622 | orchestrator |
2026-01-28 00:59:35.462628 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-28 00:59:35.462636 | orchestrator | Wednesday 28 January 2026 00:53:36 +0000 (0:00:00.374) 0:05:19.144 *****
2026-01-28 00:59:35.462641 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.462647 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.462652 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.462657 | orchestrator |
2026-01-28 00:59:35.462663 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-28 00:59:35.462668 | orchestrator | Wednesday 28 January 2026 00:53:36 +0000 (0:00:00.604) 0:05:19.748 *****
2026-01-28 00:59:35.462673 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.462678 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.462684 | orchestrator | ok: [testbed-node-2]
2026-01-28
00:59:35.462689 | orchestrator | 2026-01-28 00:59:35.462697 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-28 00:59:35.462702 | orchestrator | Wednesday 28 January 2026 00:53:37 +0000 (0:00:00.822) 0:05:20.570 ***** 2026-01-28 00:59:35.462708 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.462713 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.462718 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.462724 | orchestrator | 2026-01-28 00:59:35.462729 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-28 00:59:35.462734 | orchestrator | Wednesday 28 January 2026 00:53:38 +0000 (0:00:00.795) 0:05:21.366 ***** 2026-01-28 00:59:35.462740 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.462745 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.462750 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.462755 | orchestrator | 2026-01-28 00:59:35.462761 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-28 00:59:35.462766 | orchestrator | Wednesday 28 January 2026 00:53:38 +0000 (0:00:00.313) 0:05:21.679 ***** 2026-01-28 00:59:35.462771 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.462777 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.462782 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.462787 | orchestrator | 2026-01-28 00:59:35.462792 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-28 00:59:35.462798 | orchestrator | Wednesday 28 January 2026 00:53:39 +0000 (0:00:00.642) 0:05:22.322 ***** 2026-01-28 00:59:35.462803 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.462808 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.462814 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.462819 | orchestrator | 
2026-01-28 00:59:35.462824 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-28 00:59:35.462840 | orchestrator | Wednesday 28 January 2026 00:53:39 +0000 (0:00:00.409) 0:05:22.731 ***** 2026-01-28 00:59:35.462846 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.462851 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.462857 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.462876 | orchestrator | 2026-01-28 00:59:35.462882 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-28 00:59:35.462887 | orchestrator | Wednesday 28 January 2026 00:53:39 +0000 (0:00:00.313) 0:05:23.044 ***** 2026-01-28 00:59:35.462892 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.462898 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.462903 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.462908 | orchestrator | 2026-01-28 00:59:35.462913 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-28 00:59:35.462919 | orchestrator | Wednesday 28 January 2026 00:53:40 +0000 (0:00:00.309) 0:05:23.353 ***** 2026-01-28 00:59:35.462924 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.462929 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.462935 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.462940 | orchestrator | 2026-01-28 00:59:35.462945 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-28 00:59:35.462950 | orchestrator | Wednesday 28 January 2026 00:53:40 +0000 (0:00:00.333) 0:05:23.686 ***** 2026-01-28 00:59:35.462956 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.462961 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.462966 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.462971 | orchestrator | 
2026-01-28 00:59:35.462977 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-28 00:59:35.462982 | orchestrator | Wednesday 28 January 2026 00:53:41 +0000 (0:00:00.576) 0:05:24.263 ***** 2026-01-28 00:59:35.462987 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.462993 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.462998 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.463003 | orchestrator | 2026-01-28 00:59:35.463009 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-28 00:59:35.463018 | orchestrator | Wednesday 28 January 2026 00:53:41 +0000 (0:00:00.396) 0:05:24.659 ***** 2026-01-28 00:59:35.463023 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.463029 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.463034 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.463039 | orchestrator | 2026-01-28 00:59:35.463044 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-28 00:59:35.463050 | orchestrator | Wednesday 28 January 2026 00:53:41 +0000 (0:00:00.348) 0:05:25.008 ***** 2026-01-28 00:59:35.463055 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.463060 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.463065 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.463071 | orchestrator | 2026-01-28 00:59:35.463076 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-01-28 00:59:35.463081 | orchestrator | Wednesday 28 January 2026 00:53:42 +0000 (0:00:00.795) 0:05:25.803 ***** 2026-01-28 00:59:35.463087 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-01-28 00:59:35.463092 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-28 00:59:35.463098 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-01-28 00:59:35.463103 | orchestrator | 2026-01-28 00:59:35.463108 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-01-28 00:59:35.463113 | orchestrator | Wednesday 28 January 2026 00:53:43 +0000 (0:00:00.695) 0:05:26.499 ***** 2026-01-28 00:59:35.463119 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.463124 | orchestrator | 2026-01-28 00:59:35.463129 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-01-28 00:59:35.463137 | orchestrator | Wednesday 28 January 2026 00:53:44 +0000 (0:00:00.635) 0:05:27.134 ***** 2026-01-28 00:59:35.463143 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.463148 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.463154 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.463159 | orchestrator | 2026-01-28 00:59:35.463164 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-01-28 00:59:35.463169 | orchestrator | Wednesday 28 January 2026 00:53:44 +0000 (0:00:00.891) 0:05:28.026 ***** 2026-01-28 00:59:35.463175 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.463180 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.463185 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.463190 | orchestrator | 2026-01-28 00:59:35.463196 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-01-28 00:59:35.463201 | orchestrator | Wednesday 28 January 2026 00:53:45 +0000 (0:00:00.629) 0:05:28.655 ***** 2026-01-28 00:59:35.463206 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-28 00:59:35.463211 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-28 00:59:35.463217 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-01-28 00:59:35.463222 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-01-28 00:59:35.463227 | orchestrator | 2026-01-28 00:59:35.463232 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-01-28 00:59:35.463238 | orchestrator | Wednesday 28 January 2026 00:53:55 +0000 (0:00:10.160) 0:05:38.815 ***** 2026-01-28 00:59:35.463243 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.463248 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.463253 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.463259 | orchestrator | 2026-01-28 00:59:35.463264 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-01-28 00:59:35.463269 | orchestrator | Wednesday 28 January 2026 00:53:56 +0000 (0:00:00.371) 0:05:39.187 ***** 2026-01-28 00:59:35.463275 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-28 00:59:35.463280 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-28 00:59:35.463285 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-28 00:59:35.463296 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-01-28 00:59:35.463301 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 00:59:35.463318 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 00:59:35.463324 | orchestrator | 2026-01-28 00:59:35.463329 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-01-28 00:59:35.463334 | orchestrator | Wednesday 28 January 2026 00:53:58 +0000 (0:00:02.359) 0:05:41.546 ***** 2026-01-28 00:59:35.463339 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-01-28 00:59:35.463345 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-01-28 00:59:35.463350 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-01-28 
00:59:35.463355 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-28 00:59:35.463360 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-01-28 00:59:35.463366 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-01-28 00:59:35.463371 | orchestrator | 2026-01-28 00:59:35.463376 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-01-28 00:59:35.463382 | orchestrator | Wednesday 28 January 2026 00:53:59 +0000 (0:00:01.317) 0:05:42.864 ***** 2026-01-28 00:59:35.463387 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.463392 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.463398 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.463403 | orchestrator | 2026-01-28 00:59:35.463408 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-01-28 00:59:35.463414 | orchestrator | Wednesday 28 January 2026 00:54:01 +0000 (0:00:01.234) 0:05:44.098 ***** 2026-01-28 00:59:35.463419 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.463424 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.463429 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.463435 | orchestrator | 2026-01-28 00:59:35.463440 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-01-28 00:59:35.463445 | orchestrator | Wednesday 28 January 2026 00:54:01 +0000 (0:00:00.306) 0:05:44.405 ***** 2026-01-28 00:59:35.463451 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.463456 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.463461 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.463466 | orchestrator | 2026-01-28 00:59:35.463472 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-01-28 00:59:35.463477 | orchestrator | Wednesday 28 January 2026 00:54:01 +0000 (0:00:00.305) 
0:05:44.710 ***** 2026-01-28 00:59:35.463482 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.463488 | orchestrator | 2026-01-28 00:59:35.463493 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-01-28 00:59:35.463498 | orchestrator | Wednesday 28 January 2026 00:54:02 +0000 (0:00:00.763) 0:05:45.474 ***** 2026-01-28 00:59:35.463504 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.463509 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.463514 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.463519 | orchestrator | 2026-01-28 00:59:35.463525 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-01-28 00:59:35.463530 | orchestrator | Wednesday 28 January 2026 00:54:02 +0000 (0:00:00.375) 0:05:45.849 ***** 2026-01-28 00:59:35.463535 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.463541 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.463546 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.463551 | orchestrator | 2026-01-28 00:59:35.463557 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-01-28 00:59:35.463562 | orchestrator | Wednesday 28 January 2026 00:54:03 +0000 (0:00:00.358) 0:05:46.208 ***** 2026-01-28 00:59:35.463567 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.463576 | orchestrator | 2026-01-28 00:59:35.463584 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-01-28 00:59:35.463590 | orchestrator | Wednesday 28 January 2026 00:54:04 +0000 (0:00:00.897) 0:05:47.105 ***** 2026-01-28 00:59:35.463595 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.463600 | orchestrator | 
changed: [testbed-node-1] 2026-01-28 00:59:35.463606 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.463611 | orchestrator | 2026-01-28 00:59:35.463616 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-01-28 00:59:35.463621 | orchestrator | Wednesday 28 January 2026 00:54:05 +0000 (0:00:01.395) 0:05:48.500 ***** 2026-01-28 00:59:35.463627 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.463632 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.463637 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.463643 | orchestrator | 2026-01-28 00:59:35.463648 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-01-28 00:59:35.463653 | orchestrator | Wednesday 28 January 2026 00:54:06 +0000 (0:00:01.398) 0:05:49.898 ***** 2026-01-28 00:59:35.463659 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.463664 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.463669 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.463674 | orchestrator | 2026-01-28 00:59:35.463680 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-01-28 00:59:35.463685 | orchestrator | Wednesday 28 January 2026 00:54:08 +0000 (0:00:02.104) 0:05:52.003 ***** 2026-01-28 00:59:35.463690 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.463696 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.463701 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.463706 | orchestrator | 2026-01-28 00:59:35.463711 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-01-28 00:59:35.463717 | orchestrator | Wednesday 28 January 2026 00:54:11 +0000 (0:00:02.556) 0:05:54.560 ***** 2026-01-28 00:59:35.463722 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.463727 | orchestrator | skipping: 
[testbed-node-1] 2026-01-28 00:59:35.463733 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-01-28 00:59:35.463738 | orchestrator | 2026-01-28 00:59:35.463743 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-01-28 00:59:35.463748 | orchestrator | Wednesday 28 January 2026 00:54:11 +0000 (0:00:00.411) 0:05:54.971 ***** 2026-01-28 00:59:35.463763 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-01-28 00:59:35.463769 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-01-28 00:59:35.463775 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-01-28 00:59:35.463780 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-01-28 00:59:35.463786 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-01-28 00:59:35.463791 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-01-28 00:59:35.463796 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-28 00:59:35.463801 | orchestrator | 2026-01-28 00:59:35.463807 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-01-28 00:59:35.463812 | orchestrator | Wednesday 28 January 2026 00:54:48 +0000 (0:00:36.116) 0:06:31.088 ***** 2026-01-28 00:59:35.463817 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-01-28 00:59:35.463823 | orchestrator | 2026-01-28 00:59:35.463828 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-01-28 00:59:35.463833 | orchestrator | Wednesday 28 January 2026 00:54:49 +0000 (0:00:01.264) 0:06:32.353 ***** 2026-01-28 00:59:35.463843 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.463848 | orchestrator | 2026-01-28 00:59:35.463854 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-01-28 00:59:35.463888 | orchestrator | Wednesday 28 January 2026 00:54:49 +0000 (0:00:00.337) 0:06:32.690 ***** 2026-01-28 00:59:35.463896 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.463901 | orchestrator | 2026-01-28 00:59:35.463906 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-01-28 00:59:35.463912 | orchestrator | Wednesday 28 January 2026 00:54:49 +0000 (0:00:00.150) 0:06:32.840 ***** 2026-01-28 00:59:35.463917 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-01-28 00:59:35.463922 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-01-28 00:59:35.463927 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-01-28 00:59:35.463933 | orchestrator | 2026-01-28 00:59:35.463938 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-01-28 00:59:35.463943 | orchestrator | Wednesday 28 January 2026 00:54:56 +0000 (0:00:06.475) 0:06:39.316 ***** 2026-01-28 00:59:35.463949 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-01-28 00:59:35.463954 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-01-28 00:59:35.463959 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-01-28 00:59:35.463965 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-01-28 00:59:35.463970 | orchestrator | 2026-01-28 00:59:35.463975 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-28 00:59:35.463981 | orchestrator | Wednesday 28 January 2026 00:55:01 +0000 (0:00:05.354) 0:06:44.671 ***** 2026-01-28 00:59:35.463986 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.463991 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.463996 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.464002 | orchestrator | 2026-01-28 00:59:35.464012 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-01-28 00:59:35.464017 | orchestrator | Wednesday 28 January 2026 00:55:02 +0000 (0:00:00.825) 0:06:45.496 ***** 2026-01-28 00:59:35.464023 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.464028 | orchestrator | 2026-01-28 00:59:35.464034 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-01-28 00:59:35.464039 | orchestrator | Wednesday 28 January 2026 00:55:03 +0000 (0:00:00.793) 0:06:46.290 ***** 2026-01-28 00:59:35.464044 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.464049 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.464055 | orchestrator | ok: 
[testbed-node-2] 2026-01-28 00:59:35.464060 | orchestrator | 2026-01-28 00:59:35.464065 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-01-28 00:59:35.464071 | orchestrator | Wednesday 28 January 2026 00:55:03 +0000 (0:00:00.334) 0:06:46.624 ***** 2026-01-28 00:59:35.464076 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.464081 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.464086 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.464092 | orchestrator | 2026-01-28 00:59:35.464097 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-01-28 00:59:35.464102 | orchestrator | Wednesday 28 January 2026 00:55:04 +0000 (0:00:01.244) 0:06:47.869 ***** 2026-01-28 00:59:35.464108 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-01-28 00:59:35.464113 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-01-28 00:59:35.464118 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-01-28 00:59:35.464124 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.464129 | orchestrator | 2026-01-28 00:59:35.464134 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-01-28 00:59:35.464144 | orchestrator | Wednesday 28 January 2026 00:55:05 +0000 (0:00:00.632) 0:06:48.501 ***** 2026-01-28 00:59:35.464149 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.464154 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.464160 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.464165 | orchestrator | 2026-01-28 00:59:35.464170 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-01-28 00:59:35.464176 | orchestrator | 2026-01-28 00:59:35.464181 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-28 
00:59:35.464197 | orchestrator | Wednesday 28 January 2026 00:55:06 +0000 (0:00:00.862) 0:06:49.364 ***** 2026-01-28 00:59:35.464203 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.464208 | orchestrator | 2026-01-28 00:59:35.464214 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-28 00:59:35.464219 | orchestrator | Wednesday 28 January 2026 00:55:06 +0000 (0:00:00.513) 0:06:49.877 ***** 2026-01-28 00:59:35.464224 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.464230 | orchestrator | 2026-01-28 00:59:35.464235 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-28 00:59:35.464240 | orchestrator | Wednesday 28 January 2026 00:55:07 +0000 (0:00:00.722) 0:06:50.599 ***** 2026-01-28 00:59:35.464246 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.464251 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.464256 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.464262 | orchestrator | 2026-01-28 00:59:35.464267 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-28 00:59:35.464273 | orchestrator | Wednesday 28 January 2026 00:55:07 +0000 (0:00:00.320) 0:06:50.919 ***** 2026-01-28 00:59:35.464278 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.464283 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.464288 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.464294 | orchestrator | 2026-01-28 00:59:35.464299 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-28 00:59:35.464304 | orchestrator | Wednesday 28 January 2026 00:55:08 +0000 (0:00:00.708) 0:06:51.628 ***** 
2026-01-28 00:59:35.464310 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.464315 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.464320 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.464325 | orchestrator |
2026-01-28 00:59:35.464331 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-28 00:59:35.464336 | orchestrator | Wednesday 28 January 2026 00:55:09 +0000 (0:00:00.758) 0:06:52.386 *****
2026-01-28 00:59:35.464342 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.464347 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.464352 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.464358 | orchestrator |
2026-01-28 00:59:35.464363 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-28 00:59:35.464368 | orchestrator | Wednesday 28 January 2026 00:55:10 +0000 (0:00:01.043) 0:06:53.430 *****
2026-01-28 00:59:35.464374 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.464379 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.464384 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.464390 | orchestrator |
2026-01-28 00:59:35.464395 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-01-28 00:59:35.464400 | orchestrator | Wednesday 28 January 2026 00:55:10 +0000 (0:00:00.318) 0:06:53.748 *****
2026-01-28 00:59:35.464406 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.464411 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.464416 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.464422 | orchestrator |
2026-01-28 00:59:35.464427 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-01-28 00:59:35.464432 | orchestrator | Wednesday 28 January 2026 00:55:11 +0000 (0:00:00.302) 0:06:54.051 *****
2026-01-28 00:59:35.464442 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.464447 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.464452 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.464458 | orchestrator |
2026-01-28 00:59:35.464463 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-01-28 00:59:35.464471 | orchestrator | Wednesday 28 January 2026 00:55:11 +0000 (0:00:00.300) 0:06:54.351 *****
2026-01-28 00:59:35.464477 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.464482 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.464487 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.464493 | orchestrator |
2026-01-28 00:59:35.464498 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-01-28 00:59:35.464504 | orchestrator | Wednesday 28 January 2026 00:55:12 +0000 (0:00:00.964) 0:06:55.316 *****
2026-01-28 00:59:35.464509 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.464514 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.464519 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.464525 | orchestrator |
2026-01-28 00:59:35.464530 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-01-28 00:59:35.464535 | orchestrator | Wednesday 28 January 2026 00:55:13 +0000 (0:00:00.764) 0:06:56.080 *****
2026-01-28 00:59:35.464541 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.464546 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.464551 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.464556 | orchestrator |
2026-01-28 00:59:35.464562 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-01-28 00:59:35.464567 | orchestrator | Wednesday 28 January 2026 00:55:13 +0000 (0:00:00.352) 0:06:56.432 *****
2026-01-28 00:59:35.464572 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.464578 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.464583 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.464589 | orchestrator |
2026-01-28 00:59:35.464594 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-01-28 00:59:35.464599 | orchestrator | Wednesday 28 January 2026 00:55:13 +0000 (0:00:00.298) 0:06:56.731 *****
2026-01-28 00:59:35.464605 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.464610 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.464615 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.464621 | orchestrator |
2026-01-28 00:59:35.464626 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-01-28 00:59:35.464631 | orchestrator | Wednesday 28 January 2026 00:55:14 +0000 (0:00:00.733) 0:06:57.464 *****
2026-01-28 00:59:35.464637 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.464642 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.464648 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.464653 | orchestrator |
2026-01-28 00:59:35.464658 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-01-28 00:59:35.464673 | orchestrator | Wednesday 28 January 2026 00:55:14 +0000 (0:00:00.423) 0:06:57.887 *****
2026-01-28 00:59:35.464679 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.464685 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.464690 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.464695 | orchestrator |
2026-01-28 00:59:35.464701 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-01-28 00:59:35.464706 | orchestrator | Wednesday 28 January 2026 00:55:15 +0000 (0:00:00.374) 0:06:58.262 *****
2026-01-28 00:59:35.464711 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.464717 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.464722 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.464727 | orchestrator |
2026-01-28 00:59:35.464733 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-01-28 00:59:35.464738 | orchestrator | Wednesday 28 January 2026 00:55:15 +0000 (0:00:00.325) 0:06:58.588 *****
2026-01-28 00:59:35.464743 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.464754 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.464759 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.464764 | orchestrator |
2026-01-28 00:59:35.464769 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-01-28 00:59:35.464775 | orchestrator | Wednesday 28 January 2026 00:55:16 +0000 (0:00:00.592) 0:06:59.180 *****
2026-01-28 00:59:35.464780 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.464786 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.464791 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.464796 | orchestrator |
2026-01-28 00:59:35.464801 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-01-28 00:59:35.464807 | orchestrator | Wednesday 28 January 2026 00:55:16 +0000 (0:00:00.324) 0:06:59.504 *****
2026-01-28 00:59:35.464812 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.464817 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.464822 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.464828 | orchestrator |
2026-01-28 00:59:35.464833 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-01-28 00:59:35.464838 | orchestrator | Wednesday 28 January 2026 00:55:16 +0000 (0:00:00.335) 0:06:59.839 *****
2026-01-28 00:59:35.464844 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.464849 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.464854 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.464876 | orchestrator |
2026-01-28 00:59:35.464885 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-01-28 00:59:35.464893 | orchestrator | Wednesday 28 January 2026 00:55:17 +0000 (0:00:00.807) 0:07:00.646 *****
2026-01-28 00:59:35.464902 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.464910 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.464919 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.464928 | orchestrator |
2026-01-28 00:59:35.464936 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-01-28 00:59:35.464945 | orchestrator | Wednesday 28 January 2026 00:55:17 +0000 (0:00:00.365) 0:07:01.012 *****
2026-01-28 00:59:35.464954 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-01-28 00:59:35.464963 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-01-28 00:59:35.464973 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-01-28 00:59:35.464982 | orchestrator |
2026-01-28 00:59:35.464991 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-01-28 00:59:35.465000 | orchestrator | Wednesday 28 January 2026 00:55:18 +0000 (0:00:00.626) 0:07:01.638 *****
2026-01-28 00:59:35.465006 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:59:35.465012 | orchestrator |
2026-01-28 00:59:35.465021 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-01-28 00:59:35.465026 | orchestrator | Wednesday 28 January 2026 00:55:19 +0000 (0:00:00.516) 0:07:02.155 *****
2026-01-28 00:59:35.465032 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.465037 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.465042 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.465048 | orchestrator |
2026-01-28 00:59:35.465053 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-01-28 00:59:35.465058 | orchestrator | Wednesday 28 January 2026 00:55:19 +0000 (0:00:00.608) 0:07:02.763 *****
2026-01-28 00:59:35.465064 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.465069 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.465074 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.465080 | orchestrator |
2026-01-28 00:59:35.465085 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-01-28 00:59:35.465090 | orchestrator | Wednesday 28 January 2026 00:55:20 +0000 (0:00:00.318) 0:07:03.082 *****
2026-01-28 00:59:35.465095 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.465106 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.465111 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.465117 | orchestrator |
2026-01-28 00:59:35.465122 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-01-28 00:59:35.465128 | orchestrator | Wednesday 28 January 2026 00:55:20 +0000 (0:00:00.370) 0:07:03.734 *****
2026-01-28 00:59:35.465133 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.465138 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.465144 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.465149 | orchestrator |
2026-01-28 00:59:35.465154 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-01-28 00:59:35.465159 | orchestrator | Wednesday 28 January 2026 00:55:21 +0000 (0:00:00.370) 0:07:04.105 *****
2026-01-28 00:59:35.465165 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-28 00:59:35.465170 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-28 00:59:35.465176 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-01-28 00:59:35.465192 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-28 00:59:35.465198 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-28 00:59:35.465204 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-28 00:59:35.465209 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-28 00:59:35.465214 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-28 00:59:35.465220 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-28 00:59:35.465225 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-28 00:59:35.465230 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-28 00:59:35.465236 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-01-28 00:59:35.465241 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-01-28 00:59:35.465246 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-01-28 00:59:35.465251 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-01-28 00:59:35.465257 | orchestrator |
2026-01-28 00:59:35.465262 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-01-28 00:59:35.465267 | orchestrator | Wednesday 28 January 2026 00:55:25 +0000 (0:00:04.460) 0:07:08.566 *****
2026-01-28 00:59:35.465272 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.465278 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.465283 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.465288 | orchestrator |
2026-01-28 00:59:35.465294 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-01-28 00:59:35.465299 | orchestrator | Wednesday 28 January 2026 00:55:25 +0000 (0:00:00.318) 0:07:08.884 *****
2026-01-28 00:59:35.465304 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:59:35.465310 | orchestrator |
2026-01-28 00:59:35.465315 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-01-28 00:59:35.465320 | orchestrator | Wednesday 28 January 2026 00:55:26 +0000 (0:00:00.593) 0:07:09.478 *****
2026-01-28 00:59:35.465326 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-28 00:59:35.465331 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-28 00:59:35.465336 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-01-28 00:59:35.465342 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-01-28 00:59:35.465352 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-01-28 00:59:35.465358 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-01-28 00:59:35.465363 | orchestrator |
2026-01-28 00:59:35.465368 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-01-28 00:59:35.465374 | orchestrator | Wednesday 28 January 2026 00:55:27 +0000 (0:00:01.440) 0:07:10.918 *****
2026-01-28 00:59:35.465379 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-01-28 00:59:35.465384 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-28 00:59:35.465392 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-01-28 00:59:35.465398 | orchestrator |
2026-01-28 00:59:35.465403 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-01-28 00:59:35.465409 | orchestrator | Wednesday 28 January 2026 00:55:29 +0000 (0:00:02.066) 0:07:12.985 *****
2026-01-28 00:59:35.465414 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-01-28 00:59:35.465419 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-01-28 00:59:35.465425 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:59:35.465430 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-01-28 00:59:35.465435 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-01-28 00:59:35.465441 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:59:35.465446 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-01-28 00:59:35.465451 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-01-28 00:59:35.465457 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:59:35.465462 | orchestrator |
2026-01-28 00:59:35.465467 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-01-28 00:59:35.465473 | orchestrator | Wednesday 28 January 2026 00:55:31 +0000 (0:00:01.485) 0:07:14.470 *****
2026-01-28 00:59:35.465478 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-01-28 00:59:35.465483 | orchestrator |
2026-01-28 00:59:35.465489 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-01-28 00:59:35.465494 | orchestrator | Wednesday 28 January 2026 00:55:33 +0000 (0:00:02.085) 0:07:16.556 *****
2026-01-28 00:59:35.465500 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:59:35.465505 | orchestrator |
2026-01-28 00:59:35.465510 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-01-28 00:59:35.465516 | orchestrator | Wednesday 28 January 2026 00:55:33 +0000 (0:00:00.479) 0:07:17.036 *****
2026-01-28 00:59:35.465521 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e', 'data_vg': 'ceph-60e20e1d-9b2b-5d4f-86ba-deb7f624d16e'})
2026-01-28 00:59:35.465527 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe', 'data_vg': 'ceph-12f0ff1a-fab7-5a0a-bd83-09da1ae004fe'})
2026-01-28 00:59:35.465544 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-e01643e5-7b60-5b49-bc8a-cfec0728964e', 'data_vg': 'ceph-e01643e5-7b60-5b49-bc8a-cfec0728964e'})
2026-01-28 00:59:35.465551 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-ae2f77e7-beca-5176-aee2-b01d14f9def4', 'data_vg': 'ceph-ae2f77e7-beca-5176-aee2-b01d14f9def4'})
2026-01-28 00:59:35.465558 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6', 'data_vg': 'ceph-6a7f1cd8-9d71-5746-99fd-f6abb350b2d6'})
2026-01-28 00:59:35.465564 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-cf0ea652-88a6-5aa8-929a-ed9131fd0cef', 'data_vg': 'ceph-cf0ea652-88a6-5aa8-929a-ed9131fd0cef'})
2026-01-28 00:59:35.465570 | orchestrator |
2026-01-28 00:59:35.465576 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-01-28 00:59:35.465582 | orchestrator | Wednesday 28 January 2026 00:56:15 +0000 (0:00:41.104) 0:07:58.140 *****
2026-01-28 00:59:35.465593 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.465599 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.465605 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.465611 | orchestrator |
2026-01-28 00:59:35.465617 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-01-28 00:59:35.465623 | orchestrator | Wednesday 28 January 2026 00:56:15 +0000 (0:00:00.376) 0:07:58.516 *****
2026-01-28 00:59:35.465629 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:59:35.465635 | orchestrator |
2026-01-28 00:59:35.465641 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-01-28 00:59:35.465647 | orchestrator | Wednesday 28 January 2026 00:56:15 +0000 (0:00:00.529) 0:07:59.045 *****
2026-01-28 00:59:35.465653 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.465659 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.465666 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.465672 | orchestrator |
2026-01-28 00:59:35.465678 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-01-28 00:59:35.465684 | orchestrator | Wednesday 28 January 2026 00:56:16 +0000 (0:00:00.998) 0:08:00.044 *****
2026-01-28 00:59:35.465690 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.465696 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.465702 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.465708 | orchestrator |
2026-01-28 00:59:35.465714 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-01-28 00:59:35.465720 | orchestrator | Wednesday 28 January 2026 00:56:19 +0000 (0:00:02.771) 0:08:02.816 *****
2026-01-28 00:59:35.465727 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:59:35.465733 | orchestrator |
2026-01-28 00:59:35.465739 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-01-28 00:59:35.465745 | orchestrator | Wednesday 28 January 2026 00:56:20 +0000 (0:00:00.526) 0:08:03.342 *****
2026-01-28 00:59:35.465751 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:59:35.465757 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:59:35.465763 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:59:35.465769 | orchestrator |
2026-01-28 00:59:35.465775 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-01-28 00:59:35.465781 | orchestrator | Wednesday 28 January 2026 00:56:21 +0000 (0:00:01.338) 0:08:04.680 *****
2026-01-28 00:59:35.465787 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:59:35.465797 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:59:35.465803 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:59:35.465809 | orchestrator |
2026-01-28 00:59:35.465815 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-01-28 00:59:35.465821 | orchestrator | Wednesday 28 January 2026 00:56:22 +0000 (0:00:01.056) 0:08:05.737 *****
2026-01-28 00:59:35.465827 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:59:35.465833 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:59:35.465839 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:59:35.465845 | orchestrator |
2026-01-28 00:59:35.465851 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-01-28 00:59:35.465857 | orchestrator | Wednesday 28 January 2026 00:56:24 +0000 (0:00:01.721) 0:08:07.458 *****
2026-01-28 00:59:35.465907 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.465913 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.465919 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.465926 | orchestrator |
2026-01-28 00:59:35.465932 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-01-28 00:59:35.465938 | orchestrator | Wednesday 28 January 2026 00:56:24 +0000 (0:00:00.323) 0:08:07.782 *****
2026-01-28 00:59:35.465944 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.465950 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.465956 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.465969 | orchestrator |
2026-01-28 00:59:35.465975 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-01-28 00:59:35.465981 | orchestrator | Wednesday 28 January 2026 00:56:25 +0000 (0:00:00.652) 0:08:08.434 *****
2026-01-28 00:59:35.465987 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-01-28 00:59:35.465993 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-01-28 00:59:35.465999 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-01-28 00:59:35.466005 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-01-28 00:59:35.466011 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-01-28 00:59:35.466052 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-01-28 00:59:35.466059 | orchestrator |
2026-01-28 00:59:35.466065 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-01-28 00:59:35.466071 | orchestrator | Wednesday 28 January 2026 00:56:26 +0000 (0:00:01.004) 0:08:09.439 *****
2026-01-28 00:59:35.466077 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-28 00:59:35.466083 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-01-28 00:59:35.466102 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-01-28 00:59:35.466109 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-01-28 00:59:35.466115 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-28 00:59:35.466121 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-28 00:59:35.466127 | orchestrator |
2026-01-28 00:59:35.466133 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-01-28 00:59:35.466139 | orchestrator | Wednesday 28 January 2026 00:56:28 +0000 (0:00:01.801) 0:08:11.241 *****
2026-01-28 00:59:35.466145 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-01-28 00:59:35.466152 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-01-28 00:59:35.466158 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-01-28 00:59:35.466164 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-01-28 00:59:35.466170 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-01-28 00:59:35.466176 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-01-28 00:59:35.466182 | orchestrator |
2026-01-28 00:59:35.466188 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-01-28 00:59:35.466194 | orchestrator | Wednesday 28 January 2026 00:56:31 +0000 (0:00:03.169) 0:08:14.410 *****
2026-01-28 00:59:35.466200 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466206 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.466212 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-28 00:59:35.466218 | orchestrator |
2026-01-28 00:59:35.466225 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-01-28 00:59:35.466231 | orchestrator | Wednesday 28 January 2026 00:56:34 +0000 (0:00:03.314) 0:08:17.724 *****
2026-01-28 00:59:35.466237 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466243 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.466249 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-01-28 00:59:35.466255 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-01-28 00:59:35.466261 | orchestrator |
2026-01-28 00:59:35.466267 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-01-28 00:59:35.466273 | orchestrator | Wednesday 28 January 2026 00:56:46 +0000 (0:00:12.288) 0:08:30.012 *****
2026-01-28 00:59:35.466280 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466286 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.466292 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.466298 | orchestrator |
2026-01-28 00:59:35.466304 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-28 00:59:35.466310 | orchestrator | Wednesday 28 January 2026 00:56:48 +0000 (0:00:01.063) 0:08:31.076 *****
2026-01-28 00:59:35.466317 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466323 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.466334 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.466340 | orchestrator |
2026-01-28 00:59:35.466346 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-01-28 00:59:35.466353 | orchestrator | Wednesday 28 January 2026 00:56:48 +0000 (0:00:00.333) 0:08:31.410 *****
2026-01-28 00:59:35.466359 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:59:35.466365 | orchestrator |
2026-01-28 00:59:35.466371 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-01-28 00:59:35.466377 | orchestrator | Wednesday 28 January 2026 00:56:48 +0000 (0:00:00.512) 0:08:31.922 *****
2026-01-28 00:59:35.466383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-28 00:59:35.466389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-28 00:59:35.466399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-28 00:59:35.466405 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466411 | orchestrator |
2026-01-28 00:59:35.466417 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-01-28 00:59:35.466424 | orchestrator | Wednesday 28 January 2026 00:56:49 +0000 (0:00:01.045) 0:08:32.968 *****
2026-01-28 00:59:35.466430 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466436 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.466442 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.466449 | orchestrator |
2026-01-28 00:59:35.466455 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-01-28 00:59:35.466461 | orchestrator | Wednesday 28 January 2026 00:56:50 +0000 (0:00:00.366) 0:08:33.334 *****
2026-01-28 00:59:35.466467 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466473 | orchestrator |
2026-01-28 00:59:35.466482 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-01-28 00:59:35.466492 | orchestrator | Wednesday 28 January 2026 00:56:50 +0000 (0:00:00.260) 0:08:33.595 *****
2026-01-28 00:59:35.466505 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466521 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.466531 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.466541 | orchestrator |
2026-01-28 00:59:35.466550 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-01-28 00:59:35.466560 | orchestrator | Wednesday 28 January 2026 00:56:50 +0000 (0:00:00.308) 0:08:33.904 *****
2026-01-28 00:59:35.466570 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466581 | orchestrator |
2026-01-28 00:59:35.466592 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-01-28 00:59:35.466603 | orchestrator | Wednesday 28 January 2026 00:56:51 +0000 (0:00:00.244) 0:08:34.148 *****
2026-01-28 00:59:35.466613 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466625 | orchestrator |
2026-01-28 00:59:35.466635 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-01-28 00:59:35.466646 | orchestrator | Wednesday 28 January 2026 00:56:51 +0000 (0:00:00.235) 0:08:34.384 *****
2026-01-28 00:59:35.466655 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466661 | orchestrator |
2026-01-28 00:59:35.466667 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-01-28 00:59:35.466673 | orchestrator | Wednesday 28 January 2026 00:56:51 +0000 (0:00:00.129) 0:08:34.513 *****
2026-01-28 00:59:35.466694 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466701 | orchestrator |
2026-01-28 00:59:35.466707 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-01-28 00:59:35.466713 | orchestrator | Wednesday 28 January 2026 00:56:51 +0000 (0:00:00.251) 0:08:34.765 *****
2026-01-28 00:59:35.466719 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466725 | orchestrator |
2026-01-28 00:59:35.466731 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-01-28 00:59:35.466737 | orchestrator | Wednesday 28 January 2026 00:56:52 +0000 (0:00:00.903) 0:08:35.669 *****
2026-01-28 00:59:35.466750 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-28 00:59:35.466756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-28 00:59:35.466763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-28 00:59:35.466774 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466784 | orchestrator |
2026-01-28 00:59:35.466794 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-01-28 00:59:35.466805 | orchestrator | Wednesday 28 January 2026 00:56:53 +0000 (0:00:00.405) 0:08:36.074 *****
2026-01-28 00:59:35.466815 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466825 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.466833 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.466842 | orchestrator |
2026-01-28 00:59:35.466854 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-01-28 00:59:35.466882 | orchestrator | Wednesday 28 January 2026 00:56:53 +0000 (0:00:00.300) 0:08:36.375 *****
2026-01-28 00:59:35.466893 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466903 | orchestrator |
2026-01-28 00:59:35.466913 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-01-28 00:59:35.466923 | orchestrator | Wednesday 28 January 2026 00:56:53 +0000 (0:00:00.226) 0:08:36.602 *****
2026-01-28 00:59:35.466934 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.466940 | orchestrator |
2026-01-28 00:59:35.466946 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-01-28 00:59:35.466952 | orchestrator |
2026-01-28 00:59:35.466958 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-01-28 00:59:35.466964 | orchestrator | Wednesday 28 January 2026 00:56:54 +0000 (0:00:00.902) 0:08:37.504 *****
2026-01-28 00:59:35.466971 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.466981 | orchestrator |
2026-01-28 00:59:35.466991 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-01-28 00:59:35.467001 | orchestrator | Wednesday 28 January 2026 00:56:55 +0000 (0:00:01.230) 0:08:38.735 *****
2026-01-28 00:59:35.467010 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.467020 | orchestrator |
2026-01-28 00:59:35.467030 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-01-28 00:59:35.467040 | orchestrator | Wednesday 28 January 2026 00:56:56 +0000 (0:00:01.007) 0:08:39.742 *****
2026-01-28 00:59:35.467050 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.467059 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.467067 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.467077 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.467087 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.467097 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.467108 | orchestrator |
2026-01-28 00:59:35.467124 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-01-28 00:59:35.467135 | orchestrator | Wednesday 28 January 2026 00:56:57 +0000 (0:00:01.147) 0:08:40.890 *****
2026-01-28 00:59:35.467145 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.467156 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.467166 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.467176 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.467183 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.467189 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.467195 | orchestrator |
2026-01-28 00:59:35.467201 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-01-28 00:59:35.467207 | orchestrator | Wednesday 28 January 2026 00:56:58 +0000 (0:00:00.694) 0:08:41.584 *****
2026-01-28 00:59:35.467215 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.467233 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.467243 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.467253 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.467264 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.467274 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.467285 | orchestrator |
2026-01-28 00:59:35.467295 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-01-28 00:59:35.467306 | orchestrator | Wednesday 28 January 2026 00:56:59 +0000 (0:00:00.998) 0:08:42.583 *****
2026-01-28 00:59:35.467316 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.467326 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.467336 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.467346 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.467356 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.467367 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.467377 | orchestrator |
2026-01-28 00:59:35.467387 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-01-28 00:59:35.467394 | orchestrator | Wednesday 28 January 2026 00:57:00 +0000 (0:00:00.639) 0:08:43.223 *****
2026-01-28 00:59:35.467400 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.467406 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.467412 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.467418 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.467424 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.467430 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.467436 | orchestrator |
2026-01-28 00:59:35.467442 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container]
************************* 2026-01-28 00:59:35.467462 | orchestrator | Wednesday 28 January 2026 00:57:01 +0000 (0:00:01.276) 0:08:44.499 ***** 2026-01-28 00:59:35.467469 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.467475 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.467481 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.467487 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.467493 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.467499 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.467505 | orchestrator | 2026-01-28 00:59:35.467511 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-28 00:59:35.467518 | orchestrator | Wednesday 28 January 2026 00:57:02 +0000 (0:00:00.605) 0:08:45.104 ***** 2026-01-28 00:59:35.467523 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.467529 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.467535 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.467541 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.467547 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.467553 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.467559 | orchestrator | 2026-01-28 00:59:35.467565 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-28 00:59:35.467572 | orchestrator | Wednesday 28 January 2026 00:57:02 +0000 (0:00:00.875) 0:08:45.980 ***** 2026-01-28 00:59:35.467578 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.467584 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.467590 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.467596 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.467602 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.467608 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.467614 | orchestrator 
| 2026-01-28 00:59:35.467620 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-28 00:59:35.467626 | orchestrator | Wednesday 28 January 2026 00:57:03 +0000 (0:00:00.964) 0:08:46.945 ***** 2026-01-28 00:59:35.467632 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.467638 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.467644 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.467650 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.467656 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.467662 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.467673 | orchestrator | 2026-01-28 00:59:35.467679 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-28 00:59:35.467685 | orchestrator | Wednesday 28 January 2026 00:57:05 +0000 (0:00:01.342) 0:08:48.287 ***** 2026-01-28 00:59:35.467692 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.467698 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.467704 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.467710 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.467716 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.467722 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.467728 | orchestrator | 2026-01-28 00:59:35.467734 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-28 00:59:35.467740 | orchestrator | Wednesday 28 January 2026 00:57:05 +0000 (0:00:00.599) 0:08:48.887 ***** 2026-01-28 00:59:35.467746 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.467752 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.467758 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.467764 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.467770 | orchestrator | ok: [testbed-node-1] 2026-01-28 
00:59:35.467776 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.467782 | orchestrator | 2026-01-28 00:59:35.467788 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-28 00:59:35.467794 | orchestrator | Wednesday 28 January 2026 00:57:06 +0000 (0:00:00.884) 0:08:49.771 ***** 2026-01-28 00:59:35.467800 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.467806 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.467812 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.467818 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.467825 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.467830 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.467836 | orchestrator | 2026-01-28 00:59:35.467847 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-28 00:59:35.467853 | orchestrator | Wednesday 28 January 2026 00:57:07 +0000 (0:00:00.560) 0:08:50.332 ***** 2026-01-28 00:59:35.467897 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.467909 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.467919 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.467928 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.467939 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.467948 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.467959 | orchestrator | 2026-01-28 00:59:35.467965 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-28 00:59:35.467971 | orchestrator | Wednesday 28 January 2026 00:57:07 +0000 (0:00:00.657) 0:08:50.989 ***** 2026-01-28 00:59:35.467978 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.467984 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.467990 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.467996 | orchestrator | skipping: [testbed-node-0] 
2026-01-28 00:59:35.468002 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.468008 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.468014 | orchestrator | 2026-01-28 00:59:35.468020 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-28 00:59:35.468026 | orchestrator | Wednesday 28 January 2026 00:57:08 +0000 (0:00:00.570) 0:08:51.560 ***** 2026-01-28 00:59:35.468032 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.468038 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.468044 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.468050 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.468056 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.468062 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.468068 | orchestrator | 2026-01-28 00:59:35.468074 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-28 00:59:35.468080 | orchestrator | Wednesday 28 January 2026 00:57:09 +0000 (0:00:00.663) 0:08:52.224 ***** 2026-01-28 00:59:35.468092 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.468098 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.468104 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.468110 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.468116 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.468122 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.468128 | orchestrator | 2026-01-28 00:59:35.468134 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-28 00:59:35.468152 | orchestrator | Wednesday 28 January 2026 00:57:09 +0000 (0:00:00.561) 0:08:52.786 ***** 2026-01-28 00:59:35.468159 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.468165 | orchestrator | skipping: [testbed-node-4] 
2026-01-28 00:59:35.468171 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.468177 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.468184 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.468190 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.468196 | orchestrator | 2026-01-28 00:59:35.468202 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-28 00:59:35.468209 | orchestrator | Wednesday 28 January 2026 00:57:10 +0000 (0:00:00.705) 0:08:53.491 ***** 2026-01-28 00:59:35.468215 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.468221 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.468227 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.468233 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.468239 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.468245 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.468251 | orchestrator | 2026-01-28 00:59:35.468258 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-28 00:59:35.468264 | orchestrator | Wednesday 28 January 2026 00:57:11 +0000 (0:00:00.624) 0:08:54.116 ***** 2026-01-28 00:59:35.468270 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.468276 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.468282 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.468288 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.468294 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.468301 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.468307 | orchestrator | 2026-01-28 00:59:35.468313 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-01-28 00:59:35.468319 | orchestrator | Wednesday 28 January 2026 00:57:12 +0000 (0:00:01.296) 0:08:55.413 ***** 2026-01-28 00:59:35.468326 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-01-28 00:59:35.468332 | orchestrator | 2026-01-28 00:59:35.468338 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-01-28 00:59:35.468344 | orchestrator | Wednesday 28 January 2026 00:57:16 +0000 (0:00:04.215) 0:08:59.628 ***** 2026-01-28 00:59:35.468350 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-28 00:59:35.468356 | orchestrator | 2026-01-28 00:59:35.468362 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-01-28 00:59:35.468368 | orchestrator | Wednesday 28 January 2026 00:57:18 +0000 (0:00:01.941) 0:09:01.569 ***** 2026-01-28 00:59:35.468375 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.468381 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.468387 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.468393 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.468399 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.468405 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.468411 | orchestrator | 2026-01-28 00:59:35.468417 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-01-28 00:59:35.468423 | orchestrator | Wednesday 28 January 2026 00:57:20 +0000 (0:00:02.047) 0:09:03.617 ***** 2026-01-28 00:59:35.468429 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.468436 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.468442 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.468448 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.468460 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.468466 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.468472 | orchestrator | 2026-01-28 00:59:35.468479 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-01-28 00:59:35.468485 | orchestrator | Wednesday 28 January 2026 00:57:21 +0000 (0:00:01.061) 0:09:04.678 ***** 2026-01-28 00:59:35.468495 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.468502 | orchestrator | 2026-01-28 00:59:35.468509 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-01-28 00:59:35.468515 | orchestrator | Wednesday 28 January 2026 00:57:22 +0000 (0:00:01.331) 0:09:06.010 ***** 2026-01-28 00:59:35.468521 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.468528 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.468534 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.468540 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.468546 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.468552 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.468558 | orchestrator | 2026-01-28 00:59:35.468564 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-01-28 00:59:35.468570 | orchestrator | Wednesday 28 January 2026 00:57:25 +0000 (0:00:02.096) 0:09:08.107 ***** 2026-01-28 00:59:35.468577 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.468583 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.468589 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.468595 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.468601 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.468607 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.468613 | orchestrator | 2026-01-28 00:59:35.468619 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-01-28 00:59:35.468626 | orchestrator | Wednesday 28 January 2026 00:57:28 +0000 (0:00:03.681) 
0:09:11.789 ***** 2026-01-28 00:59:35.468632 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 00:59:35.468638 | orchestrator | 2026-01-28 00:59:35.468645 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-01-28 00:59:35.468651 | orchestrator | Wednesday 28 January 2026 00:57:30 +0000 (0:00:01.589) 0:09:13.378 ***** 2026-01-28 00:59:35.468657 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.468663 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.468669 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.468676 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.468682 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.468688 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.468694 | orchestrator | 2026-01-28 00:59:35.468700 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-01-28 00:59:35.468718 | orchestrator | Wednesday 28 January 2026 00:57:31 +0000 (0:00:00.892) 0:09:14.270 ***** 2026-01-28 00:59:35.468724 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.468730 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.468736 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.468742 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.468748 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.468754 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.468760 | orchestrator | 2026-01-28 00:59:35.468766 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-01-28 00:59:35.468773 | orchestrator | Wednesday 28 January 2026 00:57:33 +0000 (0:00:02.701) 0:09:16.972 ***** 2026-01-28 00:59:35.468779 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.468785 | orchestrator | 
ok: [testbed-node-4] 2026-01-28 00:59:35.468791 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.468797 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.468808 | orchestrator | ok: [testbed-node-2] 2026-01-28 00:59:35.468814 | orchestrator | ok: [testbed-node-1] 2026-01-28 00:59:35.468820 | orchestrator | 2026-01-28 00:59:35.468826 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-01-28 00:59:35.468832 | orchestrator | 2026-01-28 00:59:35.468838 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-28 00:59:35.468844 | orchestrator | Wednesday 28 January 2026 00:57:35 +0000 (0:00:01.170) 0:09:18.142 ***** 2026-01-28 00:59:35.468851 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.468857 | orchestrator | 2026-01-28 00:59:35.468882 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-28 00:59:35.468889 | orchestrator | Wednesday 28 January 2026 00:57:35 +0000 (0:00:00.671) 0:09:18.814 ***** 2026-01-28 00:59:35.468895 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.468901 | orchestrator | 2026-01-28 00:59:35.468907 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-28 00:59:35.468913 | orchestrator | Wednesday 28 January 2026 00:57:36 +0000 (0:00:00.922) 0:09:19.736 ***** 2026-01-28 00:59:35.468920 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.468926 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.468932 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.468938 | orchestrator | 2026-01-28 00:59:35.468944 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2026-01-28 00:59:35.468950 | orchestrator | Wednesday 28 January 2026 00:57:36 +0000 (0:00:00.303) 0:09:20.040 ***** 2026-01-28 00:59:35.468956 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.468962 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.468968 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.468974 | orchestrator | 2026-01-28 00:59:35.468980 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-28 00:59:35.468986 | orchestrator | Wednesday 28 January 2026 00:57:37 +0000 (0:00:00.625) 0:09:20.665 ***** 2026-01-28 00:59:35.468992 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.468999 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.469005 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.469011 | orchestrator | 2026-01-28 00:59:35.469017 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-28 00:59:35.469023 | orchestrator | Wednesday 28 January 2026 00:57:38 +0000 (0:00:00.862) 0:09:21.528 ***** 2026-01-28 00:59:35.469029 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.469035 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.469041 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.469048 | orchestrator | 2026-01-28 00:59:35.469054 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-28 00:59:35.469063 | orchestrator | Wednesday 28 January 2026 00:57:39 +0000 (0:00:00.629) 0:09:22.157 ***** 2026-01-28 00:59:35.469070 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.469076 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.469082 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.469088 | orchestrator | 2026-01-28 00:59:35.469094 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-28 
00:59:35.469100 | orchestrator | Wednesday 28 January 2026 00:57:39 +0000 (0:00:00.291) 0:09:22.449 ***** 2026-01-28 00:59:35.469106 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.469112 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.469118 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.469125 | orchestrator | 2026-01-28 00:59:35.469131 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-28 00:59:35.469137 | orchestrator | Wednesday 28 January 2026 00:57:39 +0000 (0:00:00.283) 0:09:22.732 ***** 2026-01-28 00:59:35.469143 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.469149 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.469159 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.469166 | orchestrator | 2026-01-28 00:59:35.469172 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-28 00:59:35.469178 | orchestrator | Wednesday 28 January 2026 00:57:40 +0000 (0:00:00.449) 0:09:23.181 ***** 2026-01-28 00:59:35.469184 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.469190 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.469197 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.469203 | orchestrator | 2026-01-28 00:59:35.469209 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-28 00:59:35.469215 | orchestrator | Wednesday 28 January 2026 00:57:40 +0000 (0:00:00.721) 0:09:23.903 ***** 2026-01-28 00:59:35.469221 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.469227 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.469233 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.469239 | orchestrator | 2026-01-28 00:59:35.469245 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-28 00:59:35.469251 | orchestrator | 
Wednesday 28 January 2026 00:57:41 +0000 (0:00:00.731) 0:09:24.634 ***** 2026-01-28 00:59:35.469257 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.469263 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.469269 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.469275 | orchestrator | 2026-01-28 00:59:35.469282 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-28 00:59:35.469300 | orchestrator | Wednesday 28 January 2026 00:57:41 +0000 (0:00:00.317) 0:09:24.952 ***** 2026-01-28 00:59:35.469306 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.469312 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.469318 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.469324 | orchestrator | 2026-01-28 00:59:35.469331 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-28 00:59:35.469337 | orchestrator | Wednesday 28 January 2026 00:57:42 +0000 (0:00:00.530) 0:09:25.482 ***** 2026-01-28 00:59:35.469343 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.469349 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.469355 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.469361 | orchestrator | 2026-01-28 00:59:35.469367 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-28 00:59:35.469373 | orchestrator | Wednesday 28 January 2026 00:57:42 +0000 (0:00:00.317) 0:09:25.799 ***** 2026-01-28 00:59:35.469380 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.469386 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.469392 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.469398 | orchestrator | 2026-01-28 00:59:35.469404 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-28 00:59:35.469410 | orchestrator | Wednesday 28 January 2026 00:57:43 
+0000 (0:00:00.305) 0:09:26.104 ***** 2026-01-28 00:59:35.469416 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.469422 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.469428 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.469434 | orchestrator | 2026-01-28 00:59:35.469440 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-28 00:59:35.469447 | orchestrator | Wednesday 28 January 2026 00:57:43 +0000 (0:00:00.306) 0:09:26.411 ***** 2026-01-28 00:59:35.469453 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.469459 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.469465 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.469471 | orchestrator | 2026-01-28 00:59:35.469477 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-28 00:59:35.469483 | orchestrator | Wednesday 28 January 2026 00:57:43 +0000 (0:00:00.630) 0:09:27.042 ***** 2026-01-28 00:59:35.469489 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.469495 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.469501 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.469507 | orchestrator | 2026-01-28 00:59:35.469568 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-28 00:59:35.469574 | orchestrator | Wednesday 28 January 2026 00:57:44 +0000 (0:00:00.277) 0:09:27.320 ***** 2026-01-28 00:59:35.469580 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.469587 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.469593 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.469599 | orchestrator | 2026-01-28 00:59:35.469605 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-28 00:59:35.469611 | orchestrator | Wednesday 28 January 2026 00:57:44 +0000 (0:00:00.282) 
0:09:27.603 ***** 2026-01-28 00:59:35.469617 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.469623 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.469630 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.469636 | orchestrator | 2026-01-28 00:59:35.469642 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-28 00:59:35.469648 | orchestrator | Wednesday 28 January 2026 00:57:44 +0000 (0:00:00.291) 0:09:27.894 ***** 2026-01-28 00:59:35.469654 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.469660 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.469666 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.469672 | orchestrator | 2026-01-28 00:59:35.469678 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-01-28 00:59:35.469684 | orchestrator | Wednesday 28 January 2026 00:57:45 +0000 (0:00:00.741) 0:09:28.636 ***** 2026-01-28 00:59:35.469697 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.469703 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.469709 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-01-28 00:59:35.469715 | orchestrator | 2026-01-28 00:59:35.469722 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-01-28 00:59:35.469728 | orchestrator | Wednesday 28 January 2026 00:57:45 +0000 (0:00:00.352) 0:09:28.988 ***** 2026-01-28 00:59:35.469734 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-28 00:59:35.469740 | orchestrator | 2026-01-28 00:59:35.469746 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-01-28 00:59:35.469752 | orchestrator | Wednesday 28 January 2026 00:57:47 +0000 (0:00:01.848) 0:09:30.836 ***** 2026-01-28 00:59:35.469760 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-01-28 00:59:35.469768 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.469774 | orchestrator | 2026-01-28 00:59:35.469780 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-01-28 00:59:35.469786 | orchestrator | Wednesday 28 January 2026 00:57:47 +0000 (0:00:00.188) 0:09:31.024 ***** 2026-01-28 00:59:35.469794 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-28 00:59:35.469806 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-28 00:59:35.469812 | orchestrator | 2026-01-28 00:59:35.469830 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-01-28 00:59:35.469836 | orchestrator | Wednesday 28 January 2026 00:57:56 +0000 (0:00:08.082) 0:09:39.106 ***** 2026-01-28 00:59:35.469842 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-28 00:59:35.469849 | orchestrator | 2026-01-28 00:59:35.469855 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-01-28 00:59:35.469881 | orchestrator | Wednesday 28 January 2026 00:57:59 +0000 (0:00:03.283) 0:09:42.389 ***** 2026-01-28 00:59:35.469888 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-28 00:59:35.469894 | orchestrator | 2026-01-28 00:59:35.469900 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-01-28 00:59:35.469906 | orchestrator | Wednesday 28 January 2026 00:57:59 +0000 (0:00:00.542) 0:09:42.932 ***** 2026-01-28 00:59:35.469913 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-28 00:59:35.469919 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-28 00:59:35.469925 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-01-28 00:59:35.469931 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-01-28 00:59:35.469937 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-01-28 00:59:35.469943 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-01-28 00:59:35.469949 | orchestrator | 2026-01-28 00:59:35.469955 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-01-28 00:59:35.469961 | orchestrator | Wednesday 28 January 2026 00:58:00 +0000 (0:00:01.007) 0:09:43.940 ***** 2026-01-28 00:59:35.469967 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 00:59:35.469973 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-28 00:59:35.469979 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-28 00:59:35.469985 | orchestrator | 2026-01-28 00:59:35.469992 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-01-28 00:59:35.469998 | orchestrator | Wednesday 28 January 2026 00:58:03 +0000 (0:00:02.233) 0:09:46.173 ***** 2026-01-28 00:59:35.470004 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-28 00:59:35.470010 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-01-28 00:59:35.470039 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.470047 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-28 00:59:35.470053 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-28 00:59:35.470059 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.470065 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-28 00:59:35.470072 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-28 00:59:35.470078 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.470084 | orchestrator | 2026-01-28 00:59:35.470090 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-01-28 00:59:35.470096 | orchestrator | Wednesday 28 January 2026 00:58:04 +0000 (0:00:01.411) 0:09:47.585 ***** 2026-01-28 00:59:35.470102 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.470108 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.470114 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.470120 | orchestrator | 2026-01-28 00:59:35.470126 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-01-28 00:59:35.470132 | orchestrator | Wednesday 28 January 2026 00:58:07 +0000 (0:00:02.842) 0:09:50.428 ***** 2026-01-28 00:59:35.470142 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.470148 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.470154 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.470160 | orchestrator | 2026-01-28 00:59:35.470166 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-01-28 00:59:35.470172 | orchestrator | Wednesday 28 January 2026 00:58:07 +0000 (0:00:00.302) 0:09:50.730 ***** 2026-01-28 00:59:35.470178 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-01-28 00:59:35.470185 | orchestrator | 2026-01-28 00:59:35.470190 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-01-28 00:59:35.470197 | orchestrator | Wednesday 28 January 2026 00:58:08 +0000 (0:00:00.907) 0:09:51.637 ***** 2026-01-28 00:59:35.470207 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.470213 | orchestrator | 2026-01-28 00:59:35.470220 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-01-28 00:59:35.470226 | orchestrator | Wednesday 28 January 2026 00:58:09 +0000 (0:00:00.558) 0:09:52.196 ***** 2026-01-28 00:59:35.470232 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.470238 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.470244 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.470250 | orchestrator | 2026-01-28 00:59:35.470256 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-01-28 00:59:35.470262 | orchestrator | Wednesday 28 January 2026 00:58:10 +0000 (0:00:01.260) 0:09:53.457 ***** 2026-01-28 00:59:35.470268 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.470274 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.470280 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.470286 | orchestrator | 2026-01-28 00:59:35.470292 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-01-28 00:59:35.470299 | orchestrator | Wednesday 28 January 2026 00:58:11 +0000 (0:00:01.471) 0:09:54.928 ***** 2026-01-28 00:59:35.470305 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.470311 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.470317 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.470323 | orchestrator | 2026-01-28 
00:59:35.470329 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-01-28 00:59:35.470347 | orchestrator | Wednesday 28 January 2026 00:58:13 +0000 (0:00:01.700) 0:09:56.629 ***** 2026-01-28 00:59:35.470354 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.470360 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.470366 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.470372 | orchestrator | 2026-01-28 00:59:35.470378 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-01-28 00:59:35.470384 | orchestrator | Wednesday 28 January 2026 00:58:15 +0000 (0:00:01.959) 0:09:58.589 ***** 2026-01-28 00:59:35.470390 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.470397 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.470403 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.470409 | orchestrator | 2026-01-28 00:59:35.470415 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-01-28 00:59:35.470421 | orchestrator | Wednesday 28 January 2026 00:58:17 +0000 (0:00:01.572) 0:10:00.162 ***** 2026-01-28 00:59:35.470427 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.470433 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.470439 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.470445 | orchestrator | 2026-01-28 00:59:35.470451 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-01-28 00:59:35.470457 | orchestrator | Wednesday 28 January 2026 00:58:17 +0000 (0:00:00.726) 0:10:00.889 ***** 2026-01-28 00:59:35.470464 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.470470 | orchestrator | 2026-01-28 00:59:35.470476 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-01-28 00:59:35.470482 | orchestrator | Wednesday 28 January 2026 00:58:18 +0000 (0:00:00.785) 0:10:01.674 ***** 2026-01-28 00:59:35.470488 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.470494 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.470500 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.470506 | orchestrator | 2026-01-28 00:59:35.470512 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-01-28 00:59:35.470519 | orchestrator | Wednesday 28 January 2026 00:58:19 +0000 (0:00:00.406) 0:10:02.081 ***** 2026-01-28 00:59:35.470525 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.470531 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.470541 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.470547 | orchestrator | 2026-01-28 00:59:35.470553 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-01-28 00:59:35.470560 | orchestrator | Wednesday 28 January 2026 00:58:20 +0000 (0:00:01.228) 0:10:03.309 ***** 2026-01-28 00:59:35.470566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-01-28 00:59:35.470572 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-01-28 00:59:35.470578 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-01-28 00:59:35.470584 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.470590 | orchestrator | 2026-01-28 00:59:35.470596 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-01-28 00:59:35.470602 | orchestrator | Wednesday 28 January 2026 00:58:21 +0000 (0:00:00.884) 0:10:04.193 ***** 2026-01-28 00:59:35.470609 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.470615 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.470621 | orchestrator | ok: [testbed-node-5] 2026-01-28 
00:59:35.470627 | orchestrator | 2026-01-28 00:59:35.470633 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-28 00:59:35.470639 | orchestrator | 2026-01-28 00:59:35.470645 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-01-28 00:59:35.470651 | orchestrator | Wednesday 28 January 2026 00:58:22 +0000 (0:00:00.909) 0:10:05.103 ***** 2026-01-28 00:59:35.470661 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.470667 | orchestrator | 2026-01-28 00:59:35.470673 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-01-28 00:59:35.470679 | orchestrator | Wednesday 28 January 2026 00:58:22 +0000 (0:00:00.566) 0:10:05.670 ***** 2026-01-28 00:59:35.470685 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.470691 | orchestrator | 2026-01-28 00:59:35.470698 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-01-28 00:59:35.470704 | orchestrator | Wednesday 28 January 2026 00:58:23 +0000 (0:00:00.898) 0:10:06.569 ***** 2026-01-28 00:59:35.470710 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.470716 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.470722 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.470728 | orchestrator | 2026-01-28 00:59:35.470734 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-01-28 00:59:35.470740 | orchestrator | Wednesday 28 January 2026 00:58:23 +0000 (0:00:00.356) 0:10:06.926 ***** 2026-01-28 00:59:35.470746 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.470752 | orchestrator | ok: [testbed-node-4] 2026-01-28 
00:59:35.470758 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.470764 | orchestrator | 2026-01-28 00:59:35.470770 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-01-28 00:59:35.470776 | orchestrator | Wednesday 28 January 2026 00:58:24 +0000 (0:00:00.702) 0:10:07.628 ***** 2026-01-28 00:59:35.470782 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.470788 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.470794 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.470800 | orchestrator | 2026-01-28 00:59:35.470807 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-01-28 00:59:35.470813 | orchestrator | Wednesday 28 January 2026 00:58:25 +0000 (0:00:01.044) 0:10:08.672 ***** 2026-01-28 00:59:35.470819 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.470825 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.470831 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.470837 | orchestrator | 2026-01-28 00:59:35.470843 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-01-28 00:59:35.470849 | orchestrator | Wednesday 28 January 2026 00:58:26 +0000 (0:00:00.707) 0:10:09.380 ***** 2026-01-28 00:59:35.470855 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.470887 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.470894 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.470901 | orchestrator | 2026-01-28 00:59:35.470907 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-01-28 00:59:35.470913 | orchestrator | Wednesday 28 January 2026 00:58:26 +0000 (0:00:00.332) 0:10:09.713 ***** 2026-01-28 00:59:35.470919 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.470925 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.470931 | orchestrator | skipping: 
[testbed-node-5] 2026-01-28 00:59:35.470937 | orchestrator | 2026-01-28 00:59:35.470943 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-01-28 00:59:35.470949 | orchestrator | Wednesday 28 January 2026 00:58:26 +0000 (0:00:00.319) 0:10:10.033 ***** 2026-01-28 00:59:35.470955 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.470961 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.470967 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.470973 | orchestrator | 2026-01-28 00:59:35.470980 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-01-28 00:59:35.470986 | orchestrator | Wednesday 28 January 2026 00:58:27 +0000 (0:00:00.386) 0:10:10.419 ***** 2026-01-28 00:59:35.470992 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.470998 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.471004 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.471010 | orchestrator | 2026-01-28 00:59:35.471016 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-01-28 00:59:35.471022 | orchestrator | Wednesday 28 January 2026 00:58:28 +0000 (0:00:01.081) 0:10:11.501 ***** 2026-01-28 00:59:35.471028 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.471034 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.471040 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.471046 | orchestrator | 2026-01-28 00:59:35.471052 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-01-28 00:59:35.471058 | orchestrator | Wednesday 28 January 2026 00:58:29 +0000 (0:00:00.736) 0:10:12.237 ***** 2026-01-28 00:59:35.471065 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.471071 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.471077 | orchestrator | skipping: [testbed-node-5] 2026-01-28 
00:59:35.471083 | orchestrator | 2026-01-28 00:59:35.471090 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-01-28 00:59:35.471096 | orchestrator | Wednesday 28 January 2026 00:58:29 +0000 (0:00:00.302) 0:10:12.540 ***** 2026-01-28 00:59:35.471102 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.471108 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.471114 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.471120 | orchestrator | 2026-01-28 00:59:35.471126 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-01-28 00:59:35.471132 | orchestrator | Wednesday 28 January 2026 00:58:29 +0000 (0:00:00.305) 0:10:12.846 ***** 2026-01-28 00:59:35.471138 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.471144 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.471151 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.471157 | orchestrator | 2026-01-28 00:59:35.471163 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-01-28 00:59:35.471169 | orchestrator | Wednesday 28 January 2026 00:58:30 +0000 (0:00:00.627) 0:10:13.473 ***** 2026-01-28 00:59:35.471175 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.471181 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.471187 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.471193 | orchestrator | 2026-01-28 00:59:35.471199 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-01-28 00:59:35.471205 | orchestrator | Wednesday 28 January 2026 00:58:30 +0000 (0:00:00.340) 0:10:13.814 ***** 2026-01-28 00:59:35.471211 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.471217 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.471228 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.471234 | orchestrator | 2026-01-28 
00:59:35.471240 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-01-28 00:59:35.471246 | orchestrator | Wednesday 28 January 2026 00:58:31 +0000 (0:00:00.330) 0:10:14.144 ***** 2026-01-28 00:59:35.471252 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.471258 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.471264 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.471271 | orchestrator | 2026-01-28 00:59:35.471277 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-01-28 00:59:35.471283 | orchestrator | Wednesday 28 January 2026 00:58:31 +0000 (0:00:00.323) 0:10:14.468 ***** 2026-01-28 00:59:35.471289 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.471295 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.471301 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.471307 | orchestrator | 2026-01-28 00:59:35.471313 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-01-28 00:59:35.471319 | orchestrator | Wednesday 28 January 2026 00:58:32 +0000 (0:00:00.600) 0:10:15.069 ***** 2026-01-28 00:59:35.471325 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.471331 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.471338 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.471344 | orchestrator | 2026-01-28 00:59:35.471350 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-01-28 00:59:35.471356 | orchestrator | Wednesday 28 January 2026 00:58:32 +0000 (0:00:00.323) 0:10:15.392 ***** 2026-01-28 00:59:35.471362 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.471368 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.471374 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.471380 | orchestrator | 2026-01-28 00:59:35.471387 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-01-28 00:59:35.471393 | orchestrator | Wednesday 28 January 2026 00:58:32 +0000 (0:00:00.384) 0:10:15.777 ***** 2026-01-28 00:59:35.471399 | orchestrator | ok: [testbed-node-3] 2026-01-28 00:59:35.471405 | orchestrator | ok: [testbed-node-4] 2026-01-28 00:59:35.471411 | orchestrator | ok: [testbed-node-5] 2026-01-28 00:59:35.471417 | orchestrator | 2026-01-28 00:59:35.471423 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-01-28 00:59:35.471429 | orchestrator | Wednesday 28 January 2026 00:58:33 +0000 (0:00:00.785) 0:10:16.562 ***** 2026-01-28 00:59:35.471441 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.471448 | orchestrator | 2026-01-28 00:59:35.471454 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-28 00:59:35.471460 | orchestrator | Wednesday 28 January 2026 00:58:33 +0000 (0:00:00.475) 0:10:17.038 ***** 2026-01-28 00:59:35.471466 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 00:59:35.471472 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-28 00:59:35.471479 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-28 00:59:35.471485 | orchestrator | 2026-01-28 00:59:35.471491 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-28 00:59:35.471497 | orchestrator | Wednesday 28 January 2026 00:58:35 +0000 (0:00:01.920) 0:10:18.958 ***** 2026-01-28 00:59:35.471503 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-28 00:59:35.471510 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-01-28 00:59:35.471516 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.471522 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-01-28 00:59:35.471528 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-01-28 00:59:35.471534 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.471540 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-28 00:59:35.471547 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-01-28 00:59:35.471553 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.471563 | orchestrator | 2026-01-28 00:59:35.471569 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-01-28 00:59:35.471575 | orchestrator | Wednesday 28 January 2026 00:58:37 +0000 (0:00:01.331) 0:10:20.290 ***** 2026-01-28 00:59:35.471581 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.471587 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.471594 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.471600 | orchestrator | 2026-01-28 00:59:35.471606 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-01-28 00:59:35.471612 | orchestrator | Wednesday 28 January 2026 00:58:37 +0000 (0:00:00.349) 0:10:20.640 ***** 2026-01-28 00:59:35.471618 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.471625 | orchestrator | 2026-01-28 00:59:35.471631 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-01-28 00:59:35.471637 | orchestrator | Wednesday 28 January 2026 00:58:38 +0000 (0:00:00.530) 0:10:21.170 ***** 2026-01-28 00:59:35.471643 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-01-28 00:59:35.471650 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-01-28 00:59:35.471656 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-01-28 00:59:35.471662 | orchestrator | 2026-01-28 00:59:35.471669 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-01-28 00:59:35.471675 | orchestrator | Wednesday 28 January 2026 00:58:39 +0000 (0:00:01.295) 0:10:22.466 ***** 2026-01-28 00:59:35.471734 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 00:59:35.471750 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-28 00:59:35.471757 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 00:59:35.471763 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-28 00:59:35.471769 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 00:59:35.471775 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-01-28 00:59:35.471782 | orchestrator | 2026-01-28 00:59:35.471788 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-01-28 00:59:35.471795 | orchestrator | Wednesday 28 January 2026 00:58:43 +0000 (0:00:04.433) 0:10:26.899 ***** 2026-01-28 00:59:35.471801 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 00:59:35.471807 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-28 00:59:35.471813 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 00:59:35.471820 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-28 00:59:35.471826 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 00:59:35.471832 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-28 00:59:35.471838 | orchestrator | 2026-01-28 00:59:35.471844 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-01-28 00:59:35.471851 | orchestrator | Wednesday 28 January 2026 00:58:45 +0000 (0:00:02.002) 0:10:28.902 ***** 2026-01-28 00:59:35.471857 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-28 00:59:35.471900 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.471914 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-28 00:59:35.471920 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.471926 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-28 00:59:35.471933 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.471939 | orchestrator | 2026-01-28 00:59:35.471951 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-01-28 00:59:35.471958 | orchestrator | Wednesday 28 January 2026 00:58:46 +0000 (0:00:01.114) 0:10:30.016 ***** 2026-01-28 00:59:35.471964 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-01-28 00:59:35.471970 | orchestrator | 2026-01-28 00:59:35.471976 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-01-28 00:59:35.471982 | orchestrator | Wednesday 28 January 2026 00:58:47 +0000 (0:00:00.228) 0:10:30.245 ***** 2026-01-28 00:59:35.471989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-01-28 00:59:35.471995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-28 00:59:35.472002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-28 00:59:35.472008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-28 00:59:35.472014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-28 00:59:35.472021 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.472027 | orchestrator | 2026-01-28 00:59:35.472033 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-01-28 00:59:35.472039 | orchestrator | Wednesday 28 January 2026 00:58:48 +0000 (0:00:01.081) 0:10:31.327 ***** 2026-01-28 00:59:35.472046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-28 00:59:35.472052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-28 00:59:35.472058 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-28 00:59:35.472064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-28 00:59:35.472070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-01-28 00:59:35.472076 | orchestrator | skipping: [testbed-node-3] 2026-01-28 
00:59:35.472083 | orchestrator | 2026-01-28 00:59:35.472089 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-01-28 00:59:35.472095 | orchestrator | Wednesday 28 January 2026 00:58:48 +0000 (0:00:00.605) 0:10:31.932 ***** 2026-01-28 00:59:35.472101 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-28 00:59:35.472111 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-28 00:59:35.472117 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-28 00:59:35.472123 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-28 00:59:35.472130 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-01-28 00:59:35.472140 | orchestrator | 2026-01-28 00:59:35.472147 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-01-28 00:59:35.472153 | orchestrator | Wednesday 28 January 2026 00:59:19 +0000 (0:00:30.909) 0:11:02.842 ***** 2026-01-28 00:59:35.472159 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.472165 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.472171 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.472178 | orchestrator | 2026-01-28 00:59:35.472184 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-01-28 00:59:35.472190 | orchestrator | 
Wednesday 28 January 2026 00:59:20 +0000 (0:00:00.327) 0:11:03.169 ***** 2026-01-28 00:59:35.472196 | orchestrator | skipping: [testbed-node-3] 2026-01-28 00:59:35.472202 | orchestrator | skipping: [testbed-node-4] 2026-01-28 00:59:35.472208 | orchestrator | skipping: [testbed-node-5] 2026-01-28 00:59:35.472214 | orchestrator | 2026-01-28 00:59:35.472220 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-01-28 00:59:35.472227 | orchestrator | Wednesday 28 January 2026 00:59:20 +0000 (0:00:00.340) 0:11:03.510 ***** 2026-01-28 00:59:35.472233 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.472239 | orchestrator | 2026-01-28 00:59:35.472245 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-01-28 00:59:35.472251 | orchestrator | Wednesday 28 January 2026 00:59:21 +0000 (0:00:00.756) 0:11:04.266 ***** 2026-01-28 00:59:35.472261 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 00:59:35.472268 | orchestrator | 2026-01-28 00:59:35.472274 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-01-28 00:59:35.472280 | orchestrator | Wednesday 28 January 2026 00:59:21 +0000 (0:00:00.518) 0:11:04.785 ***** 2026-01-28 00:59:35.472286 | orchestrator | changed: [testbed-node-3] 2026-01-28 00:59:35.472292 | orchestrator | changed: [testbed-node-4] 2026-01-28 00:59:35.472298 | orchestrator | changed: [testbed-node-5] 2026-01-28 00:59:35.472305 | orchestrator | 2026-01-28 00:59:35.472311 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-01-28 00:59:35.472317 | orchestrator | Wednesday 28 January 2026 00:59:22 +0000 (0:00:01.249) 0:11:06.035 ***** 2026-01-28 00:59:35.472323 | orchestrator | changed: 
[testbed-node-3]
2026-01-28 00:59:35.472329 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:59:35.472335 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:59:35.472341 | orchestrator |
2026-01-28 00:59:35.472347 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-01-28 00:59:35.472353 | orchestrator | Wednesday 28 January 2026 00:59:24 +0000 (0:00:01.410) 0:11:07.445 *****
2026-01-28 00:59:35.472359 | orchestrator | changed: [testbed-node-3]
2026-01-28 00:59:35.472365 | orchestrator | changed: [testbed-node-4]
2026-01-28 00:59:35.472371 | orchestrator | changed: [testbed-node-5]
2026-01-28 00:59:35.472377 | orchestrator |
2026-01-28 00:59:35.472383 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-01-28 00:59:35.472390 | orchestrator | Wednesday 28 January 2026 00:59:26 +0000 (0:00:02.273) 0:11:09.718 *****
2026-01-28 00:59:35.472396 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-01-28 00:59:35.472402 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-01-28 00:59:35.472408 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-01-28 00:59:35.472414 | orchestrator |
2026-01-28 00:59:35.472420 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-01-28 00:59:35.472426 | orchestrator | Wednesday 28 January 2026 00:59:29 +0000 (0:00:02.672) 0:11:12.391 *****
2026-01-28 00:59:35.472436 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.472443 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.472449 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.472455 | orchestrator |
2026-01-28 00:59:35.472461 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-01-28 00:59:35.472467 | orchestrator | Wednesday 28 January 2026 00:59:29 +0000 (0:00:00.361) 0:11:12.752 *****
2026-01-28 00:59:35.472473 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 00:59:35.472479 | orchestrator |
2026-01-28 00:59:35.472485 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-01-28 00:59:35.472492 | orchestrator | Wednesday 28 January 2026 00:59:30 +0000 (0:00:00.607) 0:11:13.360 *****
2026-01-28 00:59:35.472498 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.472504 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.472510 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.472516 | orchestrator |
2026-01-28 00:59:35.472522 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-01-28 00:59:35.472528 | orchestrator | Wednesday 28 January 2026 00:59:30 +0000 (0:00:00.565) 0:11:13.926 *****
2026-01-28 00:59:35.472534 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.472543 | orchestrator | skipping: [testbed-node-4]
2026-01-28 00:59:35.472549 | orchestrator | skipping: [testbed-node-5]
2026-01-28 00:59:35.472555 | orchestrator |
2026-01-28 00:59:35.472561 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-01-28 00:59:35.472567 | orchestrator | Wednesday 28 January 2026 00:59:31 +0000 (0:00:00.620) 0:11:14.274 *****
2026-01-28 00:59:35.472574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-28 00:59:35.472580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-28 00:59:35.472586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-28 00:59:35.472592 | orchestrator | skipping: [testbed-node-3]
2026-01-28 00:59:35.472598 | orchestrator |
2026-01-28 00:59:35.472604 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-01-28 00:59:35.472610 | orchestrator | Wednesday 28 January 2026 00:59:31 +0000 (0:00:00.620) 0:11:14.894 *****
2026-01-28 00:59:35.472616 | orchestrator | ok: [testbed-node-3]
2026-01-28 00:59:35.472622 | orchestrator | ok: [testbed-node-4]
2026-01-28 00:59:35.472628 | orchestrator | ok: [testbed-node-5]
2026-01-28 00:59:35.472634 | orchestrator |
2026-01-28 00:59:35.472640 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 00:59:35.472647 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-01-28 00:59:35.472653 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-01-28 00:59:35.472660 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-01-28 00:59:35.472666 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-01-28 00:59:35.472672 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-01-28 00:59:35.472681 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-01-28 00:59:35.472688 | orchestrator |
2026-01-28 00:59:35.472694 | orchestrator |
2026-01-28 00:59:35.472700 | orchestrator |
2026-01-28 00:59:35.472706 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 00:59:35.472717 | orchestrator | Wednesday 28 January 2026 00:59:32 +0000 (0:00:00.255) 0:11:15.150 *****
2026-01-28 00:59:35.472723 | orchestrator | ===============================================================================
2026-01-28 00:59:35.472729 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 45.22s
2026-01-28 00:59:35.472735 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.10s
2026-01-28 00:59:35.472741 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.12s
2026-01-28 00:59:35.472747 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.91s
2026-01-28 00:59:35.472753 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.83s
2026-01-28 00:59:35.472759 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.16s
2026-01-28 00:59:35.472765 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.29s
2026-01-28 00:59:35.472772 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.16s
2026-01-28 00:59:35.472777 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.00s
2026-01-28 00:59:35.472784 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.08s
2026-01-28 00:59:35.472790 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.64s
2026-01-28 00:59:35.472796 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.48s
2026-01-28 00:59:35.472802 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.35s
2026-01-28 00:59:35.472808 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.46s
2026-01-28 00:59:35.472814 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.43s
2026-01-28 00:59:35.472820 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.22s
2026-01-28 00:59:35.472826 | orchestrator | ceph-container-common : Enable ceph.target ------------------------------ 4.14s
2026-01-28 00:59:35.472832 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.93s
2026-01-28 00:59:35.472838 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.68s
2026-01-28 00:59:35.472844 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.31s
2026-01-28 00:59:35.472850 | orchestrator |
2026-01-28 00:59:35.472856 | orchestrator |
2026-01-28 00:59:35.472881 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 00:59:35.472887 | orchestrator |
2026-01-28 00:59:35.472894 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 00:59:35.472899 | orchestrator | Wednesday 28 January 2026 00:57:06 +0000 (0:00:00.275) 0:00:00.275 *****
2026-01-28 00:59:35.472906 | orchestrator | ok: [testbed-node-0]
2026-01-28 00:59:35.472912 | orchestrator | ok: [testbed-node-1]
2026-01-28 00:59:35.472918 | orchestrator | ok: [testbed-node-2]
2026-01-28 00:59:35.472924 | orchestrator |
2026-01-28 00:59:35.472930 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 00:59:35.472939 | orchestrator | Wednesday 28 January 2026 00:57:06 +0000 (0:00:00.290) 0:00:00.566 *****
2026-01-28 00:59:35.472946 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-01-28 00:59:35.472952 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-01-28 00:59:35.472958 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-01-28 00:59:35.472964 | orchestrator |
2026-01-28 00:59:35.472970 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-01-28 00:59:35.472976 | orchestrator |
2026-01-28 00:59:35.472982
| orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-28 00:59:35.472989 | orchestrator | Wednesday 28 January 2026 00:57:07 +0000 (0:00:00.389) 0:00:00.956 *****
2026-01-28 00:59:35.472995 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.473001 | orchestrator |
2026-01-28 00:59:35.473011 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-01-28 00:59:35.473018 | orchestrator | Wednesday 28 January 2026 00:57:07 +0000 (0:00:00.442) 0:00:01.398 *****
2026-01-28 00:59:35.473024 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-28 00:59:35.473030 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-28 00:59:35.473036 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-01-28 00:59:35.473042 | orchestrator |
2026-01-28 00:59:35.473048 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-01-28 00:59:35.473054 | orchestrator | Wednesday 28 January 2026 00:57:08 +0000 (0:00:00.682) 0:00:02.081 *****
2026-01-28 00:59:35.473068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473099 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-28 00:59:35.473115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-28 00:59:35.473123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-28 00:59:35.473130 | orchestrator |
2026-01-28 00:59:35.473136 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-01-28 00:59:35.473143 | orchestrator | Wednesday 28 January 2026 00:57:09 +0000 (0:00:01.569) 0:00:03.650 *****
2026-01-28 00:59:35.473149 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 00:59:35.473155 | orchestrator |
2026-01-28 00:59:35.473161 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-01-28 00:59:35.473167 | orchestrator | Wednesday 28 January 2026 00:57:10 +0000 (0:00:00.470) 0:00:04.120
*****
2026-01-28 00:59:35.473177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-28 00:59:35.473213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-28 00:59:35.473225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass':
'password'}}}})
2026-01-28 00:59:35.473236 | orchestrator |
2026-01-28 00:59:35.473243 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-01-28 00:59:35.473249 | orchestrator | Wednesday 28 January 2026 00:57:13 +0000 (0:00:03.078) 0:00:07.199 *****
2026-01-28 00:59:35.473259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-28 00:59:35.473273 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.473280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-28 00:59:35.473300 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.473310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-28 00:59:35.473324 | orchestrator | skipping: [testbed-node-2]
2026-01-28 00:59:35.473330 | orchestrator |
2026-01-28 00:59:35.473336 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-01-28 00:59:35.473342 | orchestrator | Wednesday 28 January 2026 00:57:14 +0000 (0:00:01.087) 0:00:08.286 *****
2026-01-28 00:59:35.473349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-28 00:59:35.473371 | orchestrator | skipping: [testbed-node-0]
2026-01-28 00:59:35.473381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-01-28 00:59:35.473395 | orchestrator | skipping: [testbed-node-1]
2026-01-28 00:59:35.473401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-01-28 00:59:35.473415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards',
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-01-28 00:59:35.473422 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.473428 | orchestrator | 2026-01-28 00:59:35.473434 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-01-28 00:59:35.473440 | orchestrator | Wednesday 28 January 2026 00:57:15 +0000 (0:00:00.997) 0:00:09.283 ***** 2026-01-28 00:59:35.473450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-28 00:59:35.473457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-28 00:59:35.473463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-28 00:59:35.473477 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-28 00:59:35.473484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-28 00:59:35.473496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-28 00:59:35.473503 | orchestrator | 2026-01-28 00:59:35.473509 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-01-28 00:59:35.473515 | orchestrator | Wednesday 28 January 2026 00:57:17 +0000 (0:00:02.217) 0:00:11.501 ***** 2026-01-28 00:59:35.473521 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.473527 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.473534 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.473544 | orchestrator | 2026-01-28 00:59:35.473550 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-01-28 00:59:35.473557 | orchestrator | Wednesday 28 
January 2026 00:57:20 +0000 (0:00:03.330) 0:00:14.832 ***** 2026-01-28 00:59:35.473563 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.473569 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.473575 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.473581 | orchestrator | 2026-01-28 00:59:35.473587 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-01-28 00:59:35.473593 | orchestrator | Wednesday 28 January 2026 00:57:23 +0000 (0:00:02.462) 0:00:17.294 ***** 2026-01-28 00:59:35.473599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-28 00:59:35.473609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-28 00:59:35.473620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-01-28 00:59:35.473627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-28 00:59:35.473637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-28 00:59:35.473648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-01-28 00:59:35.473654 | orchestrator | 2026-01-28 00:59:35.473660 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-28 00:59:35.473667 | orchestrator | Wednesday 28 January 2026 00:57:25 +0000 (0:00:02.571) 0:00:19.865 ***** 2026-01-28 00:59:35.473673 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.473679 | orchestrator | skipping: [testbed-node-1] 2026-01-28 00:59:35.473686 | orchestrator | skipping: [testbed-node-2] 2026-01-28 00:59:35.473692 | orchestrator | 2026-01-28 00:59:35.473698 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-28 00:59:35.473704 | orchestrator | Wednesday 28 January 2026 00:57:26 +0000 (0:00:00.300) 0:00:20.166 ***** 2026-01-28 00:59:35.473710 | orchestrator | 2026-01-28 00:59:35.473716 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-28 00:59:35.473722 | orchestrator | Wednesday 28 January 2026 00:57:26 +0000 (0:00:00.068) 0:00:20.235 ***** 2026-01-28 00:59:35.473728 | orchestrator | 2026-01-28 00:59:35.473734 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-01-28 00:59:35.473740 | orchestrator | Wednesday 28 January 2026 00:57:26 +0000 (0:00:00.065) 0:00:20.300 ***** 2026-01-28 00:59:35.473747 | orchestrator | 2026-01-28 00:59:35.473756 | orchestrator | RUNNING HANDLER [opensearch : 
Disable shard allocation] ************************ 2026-01-28 00:59:35.473763 | orchestrator | Wednesday 28 January 2026 00:57:26 +0000 (0:00:00.066) 0:00:20.367 ***** 2026-01-28 00:59:35.473769 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.473775 | orchestrator | 2026-01-28 00:59:35.473788 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-01-28 00:59:35.473794 | orchestrator | Wednesday 28 January 2026 00:57:27 +0000 (0:00:00.700) 0:00:21.067 ***** 2026-01-28 00:59:35.473801 | orchestrator | skipping: [testbed-node-0] 2026-01-28 00:59:35.473807 | orchestrator | 2026-01-28 00:59:35.473813 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-01-28 00:59:35.473819 | orchestrator | Wednesday 28 January 2026 00:57:27 +0000 (0:00:00.333) 0:00:21.401 ***** 2026-01-28 00:59:35.473825 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.473831 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.473837 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.473844 | orchestrator | 2026-01-28 00:59:35.473850 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-01-28 00:59:35.473856 | orchestrator | Wednesday 28 January 2026 00:58:18 +0000 (0:00:50.621) 0:01:12.023 ***** 2026-01-28 00:59:35.473883 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.473894 | orchestrator | changed: [testbed-node-1] 2026-01-28 00:59:35.473905 | orchestrator | changed: [testbed-node-2] 2026-01-28 00:59:35.473915 | orchestrator | 2026-01-28 00:59:35.473925 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-01-28 00:59:35.473932 | orchestrator | Wednesday 28 January 2026 00:59:23 +0000 (0:01:05.337) 0:02:17.360 ***** 2026-01-28 00:59:35.473938 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-01-28 00:59:35.473944 | orchestrator | 2026-01-28 00:59:35.473950 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-01-28 00:59:35.473956 | orchestrator | Wednesday 28 January 2026 00:59:24 +0000 (0:00:00.667) 0:02:18.027 ***** 2026-01-28 00:59:35.473962 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.473968 | orchestrator | 2026-01-28 00:59:35.473975 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-01-28 00:59:35.473981 | orchestrator | Wednesday 28 January 2026 00:59:27 +0000 (0:00:03.096) 0:02:21.124 ***** 2026-01-28 00:59:35.473987 | orchestrator | ok: [testbed-node-0] 2026-01-28 00:59:35.473993 | orchestrator | 2026-01-28 00:59:35.473999 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-01-28 00:59:35.474005 | orchestrator | Wednesday 28 January 2026 00:59:29 +0000 (0:00:02.193) 0:02:23.317 ***** 2026-01-28 00:59:35.474011 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.474052 | orchestrator | 2026-01-28 00:59:35.474059 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-01-28 00:59:35.474065 | orchestrator | Wednesday 28 January 2026 00:59:31 +0000 (0:00:02.538) 0:02:25.856 ***** 2026-01-28 00:59:35.474071 | orchestrator | changed: [testbed-node-0] 2026-01-28 00:59:35.474077 | orchestrator | 2026-01-28 00:59:35.474083 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 00:59:35.474090 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-28 00:59:35.474096 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-01-28 00:59:35.474108 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  
rescued=0 ignored=0 2026-01-28 00:59:35.474114 | orchestrator | 2026-01-28 00:59:35.474120 | orchestrator | 2026-01-28 00:59:35.474126 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 00:59:35.474133 | orchestrator | Wednesday 28 January 2026 00:59:34 +0000 (0:00:02.337) 0:02:28.194 ***** 2026-01-28 00:59:35.474139 | orchestrator | =============================================================================== 2026-01-28 00:59:35.474145 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 65.34s 2026-01-28 00:59:35.474151 | orchestrator | opensearch : Restart opensearch container ------------------------------ 50.62s 2026-01-28 00:59:35.474163 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.33s 2026-01-28 00:59:35.474169 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.10s 2026-01-28 00:59:35.474175 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.08s 2026-01-28 00:59:35.474182 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.57s 2026-01-28 00:59:35.474188 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.54s 2026-01-28 00:59:35.474194 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.46s 2026-01-28 00:59:35.474200 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.34s 2026-01-28 00:59:35.474206 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.22s 2026-01-28 00:59:35.474212 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.19s 2026-01-28 00:59:35.474218 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.57s 2026-01-28 00:59:35.474225 | 
orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.09s 2026-01-28 00:59:35.474231 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.00s 2026-01-28 00:59:35.474237 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.70s 2026-01-28 00:59:35.474244 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s 2026-01-28 00:59:35.474254 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.67s 2026-01-28 00:59:35.474261 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2026-01-28 00:59:35.474267 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2026-01-28 00:59:35.474273 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2026-01-28 00:59:35.474279 | orchestrator | 2026-01-28 00:59:35 | INFO  | Task 95f59191-cce7-427a-bec3-7807e46bb732 is in state SUCCESS 2026-01-28 00:59:35.474286 | orchestrator | 2026-01-28 00:59:35 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 00:59:35.474292 | orchestrator | 2026-01-28 00:59:35 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:38.502157 | orchestrator | 2026-01-28 00:59:38 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:38.503805 | orchestrator | 2026-01-28 00:59:38 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 00:59:38.504119 | orchestrator | 2026-01-28 00:59:38 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:41.545348 | orchestrator | 2026-01-28 00:59:41 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:41.547386 | orchestrator | 2026-01-28 00:59:41 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in 
state STARTED 2026-01-28 00:59:41.547481 | orchestrator | 2026-01-28 00:59:41 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:44.595464 | orchestrator | 2026-01-28 00:59:44 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:44.597157 | orchestrator | 2026-01-28 00:59:44 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 00:59:44.599795 | orchestrator | 2026-01-28 00:59:44 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:47.643495 | orchestrator | 2026-01-28 00:59:47 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:47.645283 | orchestrator | 2026-01-28 00:59:47 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 00:59:47.645346 | orchestrator | 2026-01-28 00:59:47 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:50.704073 | orchestrator | 2026-01-28 00:59:50 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:50.706225 | orchestrator | 2026-01-28 00:59:50 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 00:59:50.706349 | orchestrator | 2026-01-28 00:59:50 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:53.746605 | orchestrator | 2026-01-28 00:59:53 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:53.748533 | orchestrator | 2026-01-28 00:59:53 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 00:59:53.748549 | orchestrator | 2026-01-28 00:59:53 | INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:56.787891 | orchestrator | 2026-01-28 00:59:56 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:56.788099 | orchestrator | 2026-01-28 00:59:56 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 00:59:56.788405 | orchestrator | 2026-01-28 00:59:56 | 
INFO  | Wait 1 second(s) until the next check 2026-01-28 00:59:59.838720 | orchestrator | 2026-01-28 00:59:59 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 00:59:59.839551 | orchestrator | 2026-01-28 00:59:59 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 00:59:59.839591 | orchestrator | 2026-01-28 00:59:59 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:02.879276 | orchestrator | 2026-01-28 01:00:02 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 01:00:02.881053 | orchestrator | 2026-01-28 01:00:02 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:02.881121 | orchestrator | 2026-01-28 01:00:02 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:05.930251 | orchestrator | 2026-01-28 01:00:05 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 01:00:05.931086 | orchestrator | 2026-01-28 01:00:05 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:05.931112 | orchestrator | 2026-01-28 01:00:05 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:08.971325 | orchestrator | 2026-01-28 01:00:08 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state STARTED 2026-01-28 01:00:08.973220 | orchestrator | 2026-01-28 01:00:08 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:08.973471 | orchestrator | 2026-01-28 01:00:08 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:12.025133 | orchestrator | 2026-01-28 01:00:12 | INFO  | Task c3e6a64c-177b-4493-a479-ae78ab09e76b is in state SUCCESS 2026-01-28 01:00:12.027173 | orchestrator | 2026-01-28 01:00:12.027232 | orchestrator | 2026-01-28 01:00:12.027253 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-01-28 01:00:12.027275 | orchestrator | 2026-01-28 
01:00:12.027294 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-01-28 01:00:12.027313 | orchestrator | Wednesday 28 January 2026 00:57:06 +0000 (0:00:00.135) 0:00:00.135 ***** 2026-01-28 01:00:12.027326 | orchestrator | ok: [localhost] => { 2026-01-28 01:00:12.027339 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-01-28 01:00:12.027350 | orchestrator | } 2026-01-28 01:00:12.027362 | orchestrator | 2026-01-28 01:00:12.027373 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-01-28 01:00:12.027384 | orchestrator | Wednesday 28 January 2026 00:57:06 +0000 (0:00:00.054) 0:00:00.189 ***** 2026-01-28 01:00:12.027504 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-01-28 01:00:12.027529 | orchestrator | ...ignoring 2026-01-28 01:00:12.027549 | orchestrator | 2026-01-28 01:00:12.027568 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-01-28 01:00:12.027586 | orchestrator | Wednesday 28 January 2026 00:57:08 +0000 (0:00:02.755) 0:00:02.945 ***** 2026-01-28 01:00:12.027604 | orchestrator | skipping: [localhost] 2026-01-28 01:00:12.027625 | orchestrator | 2026-01-28 01:00:12.027644 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-01-28 01:00:12.027664 | orchestrator | Wednesday 28 January 2026 00:57:09 +0000 (0:00:00.049) 0:00:02.994 ***** 2026-01-28 01:00:12.027676 | orchestrator | ok: [localhost] 2026-01-28 01:00:12.027687 | orchestrator | 2026-01-28 01:00:12.027698 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:00:12.027709 | orchestrator | 2026-01-28 01:00:12.027720 | orchestrator | TASK [Group hosts 
based on Kolla action] ***************************************
2026-01-28 01:00:12.027731 | orchestrator | Wednesday 28 January 2026  00:57:09 +0000 (0:00:00.139)       0:00:03.133 *****
2026-01-28 01:00:12.027742 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:00:12.027753 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:00:12.027766 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:00:12.027779 | orchestrator |
2026-01-28 01:00:12.027792 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 01:00:12.027805 | orchestrator | Wednesday 28 January 2026  00:57:09 +0000 (0:00:00.268)       0:00:03.402 *****
2026-01-28 01:00:12.027818 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-01-28 01:00:12.027832 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-01-28 01:00:12.027845 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-01-28 01:00:12.027933 | orchestrator |
2026-01-28 01:00:12.027946 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-01-28 01:00:12.027959 | orchestrator |
2026-01-28 01:00:12.027972 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-01-28 01:00:12.027984 | orchestrator | Wednesday 28 January 2026  00:57:09 +0000 (0:00:00.537)       0:00:03.939 *****
2026-01-28 01:00:12.027998 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-01-28 01:00:12.028026 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-01-28 01:00:12.028039 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-01-28 01:00:12.028052 | orchestrator |
2026-01-28 01:00:12.028064 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-28 01:00:12.028077 | orchestrator | Wednesday 28 January 2026  00:57:10 +0000 (0:00:00.352)       0:00:04.292 *****
2026-01-28 01:00:12.028090 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:00:12.028104 | orchestrator |
2026-01-28 01:00:12.028118 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-01-28 01:00:12.028129 | orchestrator | Wednesday 28 January 2026  00:57:10 +0000 (0:00:00.584)       0:00:04.877 *****
2026-01-28 01:00:12.028166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.028202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.028216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.028235 | orchestrator |
2026-01-28 01:00:12.028253 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-01-28 01:00:12.028264 | orchestrator | Wednesday 28 January 2026  00:57:13 +0000 (0:00:02.954)       0:00:07.831 *****
2026-01-28 01:00:12.028275 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:00:12.028286 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:00:12.028297 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:00:12.028308 | orchestrator |
2026-01-28 01:00:12.028319 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-01-28 01:00:12.028329 | orchestrator | Wednesday 28 January 2026  00:57:14 +0000 (0:00:00.743)       0:00:08.576 *****
2026-01-28 01:00:12.028340 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:00:12.028351 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:00:12.028362 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:00:12.028372 | orchestrator |
2026-01-28 01:00:12.028383 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-01-28 01:00:12.028394 | orchestrator | Wednesday 28 January 2026  00:57:16 +0000 (0:00:01.754)       0:00:10.331 *****
2026-01-28 01:00:12.028411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.028431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.028451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.028463 | orchestrator |
2026-01-28 01:00:12.028478 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-01-28 01:00:12.028489 | orchestrator | Wednesday 28 January 2026  00:57:20 +0000 (0:00:03.725)       0:00:14.056 *****
2026-01-28 01:00:12.028500 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:00:12.028511 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:00:12.028522 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:00:12.028533 | orchestrator |
2026-01-28 01:00:12.028544 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-01-28 01:00:12.028554 | orchestrator | Wednesday 28 January 2026  00:57:21 +0000 (0:00:01.203)       0:00:15.260 *****
2026-01-28 01:00:12.028565 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:00:12.028576 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:00:12.028594 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:00:12.028604 | orchestrator |
2026-01-28 01:00:12.028615 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-01-28 01:00:12.028626 | orchestrator | Wednesday 28 January 2026  00:57:26 +0000 (0:00:04.997)       0:00:20.258 *****
2026-01-28 01:00:12.028637 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:00:12.028648 | orchestrator |
2026-01-28 01:00:12.028659 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-01-28 01:00:12.028670 | orchestrator | Wednesday 28 January 2026  00:57:26 +0000 (0:00:00.527)       0:00:20.785 *****
2026-01-28 01:00:12.028701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.028729 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:00:12.028761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.028794 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:00:12.028823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.028843 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:00:12.028887 | orchestrator |
2026-01-28 01:00:12.028905 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-01-28 01:00:12.028924 | orchestrator | Wednesday 28 January 2026  00:57:30 +0000 (0:00:03.492)       0:00:24.278 *****
2026-01-28 01:00:12.028950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.028982 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:00:12.029014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.029036 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:00:12.029073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.029104 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:00:12.029123 | orchestrator |
2026-01-28 01:00:12.029142 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-01-28 01:00:12.029160 | orchestrator | Wednesday 28 January 2026  00:57:33 +0000 (0:00:02.856)       0:00:27.135 *****
2026-01-28 01:00:12.029189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.029208 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:00:12.029226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.029253 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:00:12.029279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.029298 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:00:12.029316 | orchestrator |
2026-01-28 01:00:12.029332 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-01-28 01:00:12.029351 | orchestrator | Wednesday 28 January 2026  00:57:35 +0000 (0:00:02.772)       0:00:29.907 *****
2026-01-28 01:00:12.029671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.029731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.029771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-01-28 01:00:12.029795 | orchestrator |
2026-01-28 01:00:12.029816 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-01-28 01:00:12.029881 | orchestrator | Wednesday 28 January 2026  00:57:39 +0000 (0:00:03.272)       0:00:33.180 *****
2026-01-28 01:00:12.029904 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:00:12.029923 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:00:12.029941 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:00:12.029960 | orchestrator |
2026-01-28 01:00:12.029978 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-01-28 01:00:12.029997 | orchestrator | Wednesday 28 January 2026  00:57:40 +0000 (0:00:00.838)       0:00:34.019 *****
2026-01-28 01:00:12.030076 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:00:12.030104 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:00:12.030123 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:00:12.030142 | orchestrator |
2026-01-28 01:00:12.030199 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-01-28 01:00:12.030219 | orchestrator | Wednesday 28 January 2026  00:57:40 +0000 (0:00:00.557)       0:00:34.576 *****
2026-01-28 01:00:12.030238 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:00:12.030268 | orchestrator | ok:
[testbed-node-1] 2026-01-28 01:00:12.030288 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:00:12.030307 | orchestrator | 2026-01-28 01:00:12.030327 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-01-28 01:00:12.030349 | orchestrator | Wednesday 28 January 2026 00:57:40 +0000 (0:00:00.335) 0:00:34.912 ***** 2026-01-28 01:00:12.030371 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-01-28 01:00:12.030393 | orchestrator | ...ignoring 2026-01-28 01:00:12.030413 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-01-28 01:00:12.030432 | orchestrator | ...ignoring 2026-01-28 01:00:12.030451 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-01-28 01:00:12.030468 | orchestrator | ...ignoring 2026-01-28 01:00:12.030487 | orchestrator | 2026-01-28 01:00:12.030504 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-01-28 01:00:12.030522 | orchestrator | Wednesday 28 January 2026 00:57:52 +0000 (0:00:11.072) 0:00:45.985 ***** 2026-01-28 01:00:12.030540 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:00:12.030559 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:00:12.030578 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:00:12.030598 | orchestrator | 2026-01-28 01:00:12.030618 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-01-28 01:00:12.030639 | orchestrator | Wednesday 28 January 2026 00:57:52 +0000 (0:00:00.441) 0:00:46.427 ***** 2026-01-28 01:00:12.030658 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:00:12.030677 | orchestrator | 
skipping: [testbed-node-1] 2026-01-28 01:00:12.030697 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.030716 | orchestrator | 2026-01-28 01:00:12.030737 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-01-28 01:00:12.030755 | orchestrator | Wednesday 28 January 2026 00:57:53 +0000 (0:00:00.665) 0:00:47.093 ***** 2026-01-28 01:00:12.030774 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:00:12.030794 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:00:12.030814 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.030835 | orchestrator | 2026-01-28 01:00:12.030882 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-01-28 01:00:12.030902 | orchestrator | Wednesday 28 January 2026 00:57:53 +0000 (0:00:00.451) 0:00:47.544 ***** 2026-01-28 01:00:12.030922 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:00:12.030942 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:00:12.030962 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.030982 | orchestrator | 2026-01-28 01:00:12.031001 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-01-28 01:00:12.031057 | orchestrator | Wednesday 28 January 2026 00:57:54 +0000 (0:00:00.480) 0:00:48.025 ***** 2026-01-28 01:00:12.031080 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:00:12.031099 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:00:12.031117 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:00:12.031136 | orchestrator | 2026-01-28 01:00:12.031153 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-01-28 01:00:12.031170 | orchestrator | Wednesday 28 January 2026 00:57:54 +0000 (0:00:00.437) 0:00:48.463 ***** 2026-01-28 01:00:12.031186 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:00:12.031201 | orchestrator | skipping: 
[testbed-node-1] 2026-01-28 01:00:12.031220 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.031237 | orchestrator | 2026-01-28 01:00:12.031255 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-28 01:00:12.031273 | orchestrator | Wednesday 28 January 2026 00:57:55 +0000 (0:00:00.778) 0:00:49.241 ***** 2026-01-28 01:00:12.031292 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:00:12.031310 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.031328 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-01-28 01:00:12.031345 | orchestrator | 2026-01-28 01:00:12.031362 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-01-28 01:00:12.031380 | orchestrator | Wednesday 28 January 2026 00:57:55 +0000 (0:00:00.421) 0:00:49.663 ***** 2026-01-28 01:00:12.031399 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:00:12.031417 | orchestrator | 2026-01-28 01:00:12.031436 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-01-28 01:00:12.031453 | orchestrator | Wednesday 28 January 2026 00:58:05 +0000 (0:00:09.723) 0:00:59.386 ***** 2026-01-28 01:00:12.031471 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:00:12.031490 | orchestrator | 2026-01-28 01:00:12.031509 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-01-28 01:00:12.031527 | orchestrator | Wednesday 28 January 2026 00:58:05 +0000 (0:00:00.169) 0:00:59.556 ***** 2026-01-28 01:00:12.031545 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:00:12.031564 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:00:12.031584 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.031604 | orchestrator | 2026-01-28 01:00:12.031625 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] 
******************* 2026-01-28 01:00:12.031645 | orchestrator | Wednesday 28 January 2026 00:58:06 +0000 (0:00:00.994) 0:01:00.551 ***** 2026-01-28 01:00:12.031695 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:00:12.031715 | orchestrator | 2026-01-28 01:00:12.031732 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-01-28 01:00:12.031750 | orchestrator | Wednesday 28 January 2026 00:58:14 +0000 (0:00:08.036) 0:01:08.587 ***** 2026-01-28 01:00:12.031768 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 2026-01-28 01:00:12.031787 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:00:12.031804 | orchestrator | 2026-01-28 01:00:12.031822 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-01-28 01:00:12.031840 | orchestrator | Wednesday 28 January 2026 00:58:21 +0000 (0:00:07.377) 0:01:15.964 ***** 2026-01-28 01:00:12.031914 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:00:12.031934 | orchestrator | 2026-01-28 01:00:12.031952 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-01-28 01:00:12.031970 | orchestrator | Wednesday 28 January 2026 00:58:25 +0000 (0:00:03.022) 0:01:18.987 ***** 2026-01-28 01:00:12.031988 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:00:12.032005 | orchestrator | 2026-01-28 01:00:12.032024 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-01-28 01:00:12.032041 | orchestrator | Wednesday 28 January 2026 00:58:25 +0000 (0:00:00.134) 0:01:19.122 ***** 2026-01-28 01:00:12.032059 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:00:12.032077 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:00:12.032113 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.032131 | orchestrator | 2026-01-28 01:00:12.032177 | orchestrator 
| RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-01-28 01:00:12.032195 | orchestrator | Wednesday 28 January 2026 00:58:25 +0000 (0:00:00.560) 0:01:19.682 ***** 2026-01-28 01:00:12.032213 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:00:12.032231 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-01-28 01:00:12.032249 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:00:12.032267 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:00:12.032285 | orchestrator | 2026-01-28 01:00:12.032302 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-01-28 01:00:12.032320 | orchestrator | skipping: no hosts matched 2026-01-28 01:00:12.032338 | orchestrator | 2026-01-28 01:00:12.032355 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-28 01:00:12.032374 | orchestrator | 2026-01-28 01:00:12.032392 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-28 01:00:12.032410 | orchestrator | Wednesday 28 January 2026 00:58:26 +0000 (0:00:01.030) 0:01:20.713 ***** 2026-01-28 01:00:12.032428 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:00:12.032446 | orchestrator | 2026-01-28 01:00:12.032463 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-28 01:00:12.032480 | orchestrator | Wednesday 28 January 2026 00:58:46 +0000 (0:00:20.160) 0:01:40.874 ***** 2026-01-28 01:00:12.032500 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:00:12.032518 | orchestrator | 2026-01-28 01:00:12.032538 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-28 01:00:12.032556 | orchestrator | Wednesday 28 January 2026 00:58:57 +0000 (0:00:10.509) 0:01:51.383 ***** 2026-01-28 01:00:12.032575 | orchestrator | ok: [testbed-node-1] 
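
The "Wait for MariaDB service to sync WSREP" tasks above block until Galera reports the node's `wsrep_local_state_comment` as `Synced`. A minimal standalone sketch of that polling logic, with a hypothetical `probe` callable standing in for the actual status query (not part of the playbook):

```python
import time

def wait_for_wsrep_sync(probe, retries=10, delay=1.0):
    """Poll `probe()` until it returns the Galera state 'Synced'.

    `probe` is a hypothetical callable returning the current value of
    wsrep_local_state_comment (e.g. 'Joined', 'Donor/Desynced', 'Synced').
    Raises TimeoutError if the node never reaches 'Synced' within the
    retry budget, mirroring the playbook's retry/timeout behaviour.
    """
    for _ in range(retries):
        if probe() == "Synced":
            return True
        time.sleep(delay)
    raise TimeoutError("node did not reach WSREP state 'Synced'")

# Example: a fake probe that reports 'Synced' on the third poll.
states = iter(["Joined", "Donor/Desynced", "Synced"])
print(wait_for_wsrep_sync(lambda: next(states), delay=0))  # prints True
```
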
2026-01-28 01:00:12.032592 | orchestrator | 2026-01-28 01:00:12.032609 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-01-28 01:00:12.032627 | orchestrator | 2026-01-28 01:00:12.032644 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-28 01:00:12.032663 | orchestrator | Wednesday 28 January 2026 00:58:59 +0000 (0:00:02.411) 0:01:53.794 ***** 2026-01-28 01:00:12.032683 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:00:12.032700 | orchestrator | 2026-01-28 01:00:12.032740 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-28 01:00:12.032758 | orchestrator | Wednesday 28 January 2026 00:59:18 +0000 (0:00:18.717) 0:02:12.512 ***** 2026-01-28 01:00:12.032775 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:00:12.032793 | orchestrator | 2026-01-28 01:00:12.032810 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-28 01:00:12.032826 | orchestrator | Wednesday 28 January 2026 00:59:34 +0000 (0:00:15.659) 0:02:28.172 ***** 2026-01-28 01:00:12.032844 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:00:12.032942 | orchestrator | 2026-01-28 01:00:12.032960 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-01-28 01:00:12.032978 | orchestrator | 2026-01-28 01:00:12.032996 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-01-28 01:00:12.033014 | orchestrator | Wednesday 28 January 2026 00:59:36 +0000 (0:00:02.680) 0:02:30.852 ***** 2026-01-28 01:00:12.033033 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:00:12.033051 | orchestrator | 2026-01-28 01:00:12.033069 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-01-28 01:00:12.033088 | orchestrator | Wednesday 28 January 2026 00:59:48 
+0000 (0:00:12.043) 0:02:42.896 ***** 2026-01-28 01:00:12.033107 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:00:12.033125 | orchestrator | 2026-01-28 01:00:12.033142 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-01-28 01:00:12.033159 | orchestrator | Wednesday 28 January 2026 00:59:53 +0000 (0:00:04.707) 0:02:47.604 ***** 2026-01-28 01:00:12.033177 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:00:12.033212 | orchestrator | 2026-01-28 01:00:12.033231 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-01-28 01:00:12.033249 | orchestrator | 2026-01-28 01:00:12.033267 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-01-28 01:00:12.033284 | orchestrator | Wednesday 28 January 2026 00:59:56 +0000 (0:00:02.694) 0:02:50.298 ***** 2026-01-28 01:00:12.033300 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:00:12.033316 | orchestrator | 2026-01-28 01:00:12.033331 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-01-28 01:00:12.033347 | orchestrator | Wednesday 28 January 2026 00:59:56 +0000 (0:00:00.555) 0:02:50.854 ***** 2026-01-28 01:00:12.033363 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:00:12.033379 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.033395 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:00:12.033409 | orchestrator | 2026-01-28 01:00:12.033425 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-01-28 01:00:12.033440 | orchestrator | Wednesday 28 January 2026 00:59:59 +0000 (0:00:02.153) 0:02:53.007 ***** 2026-01-28 01:00:12.033456 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:00:12.033471 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.033487 | orchestrator | 
changed: [testbed-node-0] 2026-01-28 01:00:12.033502 | orchestrator | 2026-01-28 01:00:12.033518 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-01-28 01:00:12.033534 | orchestrator | Wednesday 28 January 2026 01:00:01 +0000 (0:00:02.504) 0:02:55.512 ***** 2026-01-28 01:00:12.033549 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:00:12.033564 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.033591 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:00:12.033607 | orchestrator | 2026-01-28 01:00:12.033624 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-01-28 01:00:12.033639 | orchestrator | Wednesday 28 January 2026 01:00:04 +0000 (0:00:02.489) 0:02:58.002 ***** 2026-01-28 01:00:12.033655 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:00:12.033670 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:00:12.033685 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:00:12.033700 | orchestrator | 2026-01-28 01:00:12.033716 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-01-28 01:00:12.033732 | orchestrator | Wednesday 28 January 2026 01:00:06 +0000 (0:00:02.291) 0:03:00.293 ***** 2026-01-28 01:00:12.033747 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:00:12.033762 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:00:12.033778 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:00:12.033793 | orchestrator | 2026-01-28 01:00:12.033809 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-01-28 01:00:12.033824 | orchestrator | Wednesday 28 January 2026 01:00:09 +0000 (0:00:03.091) 0:03:03.384 ***** 2026-01-28 01:00:12.033839 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:00:12.033881 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:00:12.033898 | orchestrator | skipping: 
[testbed-node-2] 2026-01-28 01:00:12.033914 | orchestrator | 2026-01-28 01:00:12.033930 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:00:12.033947 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-01-28 01:00:12.033964 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-01-28 01:00:12.033981 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-28 01:00:12.033998 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-01-28 01:00:12.034124 | orchestrator | 2026-01-28 01:00:12.034151 | orchestrator | 2026-01-28 01:00:12.034191 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:00:12.034207 | orchestrator | Wednesday 28 January 2026 01:00:09 +0000 (0:00:00.235) 0:03:03.620 ***** 2026-01-28 01:00:12.034223 | orchestrator | =============================================================================== 2026-01-28 01:00:12.034239 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.88s 2026-01-28 01:00:12.034275 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.17s 2026-01-28 01:00:12.034291 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.04s 2026-01-28 01:00:12.034306 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.07s 2026-01-28 01:00:12.034322 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.72s 2026-01-28 01:00:12.034338 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.04s 2026-01-28 01:00:12.034355 | orchestrator | mariadb : Wait for first MariaDB 
service port liveness ------------------ 7.38s 2026-01-28 01:00:12.034371 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.09s 2026-01-28 01:00:12.034386 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 5.00s 2026-01-28 01:00:12.034400 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.71s 2026-01-28 01:00:12.034416 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.73s 2026-01-28 01:00:12.034432 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.49s 2026-01-28 01:00:12.034447 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.27s 2026-01-28 01:00:12.034461 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.09s 2026-01-28 01:00:12.034478 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.02s 2026-01-28 01:00:12.034494 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.95s 2026-01-28 01:00:12.034511 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.86s 2026-01-28 01:00:12.034528 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.77s 2026-01-28 01:00:12.034544 | orchestrator | Check MariaDB service --------------------------------------------------- 2.76s 2026-01-28 01:00:12.034561 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.69s 2026-01-28 01:00:12.034600 | orchestrator | 2026-01-28 01:00:12 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:12.034625 | orchestrator | 2026-01-28 01:00:12 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:12.034642 | orchestrator | 2026-01-28 
01:00:12 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:12.034658 | orchestrator | 2026-01-28 01:00:12 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:15.084506 | orchestrator | 2026-01-28 01:00:15 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:15.086225 | orchestrator | 2026-01-28 01:00:15 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:15.087672 | orchestrator | 2026-01-28 01:00:15 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:15.087700 | orchestrator | 2026-01-28 01:00:15 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:18.162448 | orchestrator | 2026-01-28 01:00:18 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:18.163005 | orchestrator | 2026-01-28 01:00:18 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:18.163682 | orchestrator | 2026-01-28 01:00:18 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:18.164012 | orchestrator | 2026-01-28 01:00:18 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:21.209803 | orchestrator | 2026-01-28 01:00:21 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:21.211121 | orchestrator | 2026-01-28 01:00:21 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:21.212678 | orchestrator | 2026-01-28 01:00:21 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:21.212691 | orchestrator | 2026-01-28 01:00:21 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:24.248051 | orchestrator | 2026-01-28 01:00:24 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:24.248831 | orchestrator | 2026-01-28 01:00:24 | INFO  | Task 
65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:24.250360 | orchestrator | 2026-01-28 01:00:24 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:24.250686 | orchestrator | 2026-01-28 01:00:24 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:27.291420 | orchestrator | 2026-01-28 01:00:27 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:27.293152 | orchestrator | 2026-01-28 01:00:27 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:27.294587 | orchestrator | 2026-01-28 01:00:27 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:27.295927 | orchestrator | 2026-01-28 01:00:27 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:30.329932 | orchestrator | 2026-01-28 01:00:30 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:30.331457 | orchestrator | 2026-01-28 01:00:30 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:30.333224 | orchestrator | 2026-01-28 01:00:30 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:30.333428 | orchestrator | 2026-01-28 01:00:30 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:33.371099 | orchestrator | 2026-01-28 01:00:33 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:33.371813 | orchestrator | 2026-01-28 01:00:33 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:33.373275 | orchestrator | 2026-01-28 01:00:33 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:33.373449 | orchestrator | 2026-01-28 01:00:33 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:36.416547 | orchestrator | 2026-01-28 01:00:36 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state 
STARTED 2026-01-28 01:00:36.418409 | orchestrator | 2026-01-28 01:00:36 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:36.420432 | orchestrator | 2026-01-28 01:00:36 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:36.420489 | orchestrator | 2026-01-28 01:00:36 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:39.455003 | orchestrator | 2026-01-28 01:00:39 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:39.455225 | orchestrator | 2026-01-28 01:00:39 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:39.456721 | orchestrator | 2026-01-28 01:00:39 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:39.456792 | orchestrator | 2026-01-28 01:00:39 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:42.495347 | orchestrator | 2026-01-28 01:00:42 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:42.496239 | orchestrator | 2026-01-28 01:00:42 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:42.498182 | orchestrator | 2026-01-28 01:00:42 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:42.498219 | orchestrator | 2026-01-28 01:00:42 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:45.546269 | orchestrator | 2026-01-28 01:00:45 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:45.546993 | orchestrator | 2026-01-28 01:00:45 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:45.548258 | orchestrator | 2026-01-28 01:00:45 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:45.548323 | orchestrator | 2026-01-28 01:00:45 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:00:48.593178 | orchestrator | 
2026-01-28 01:00:48 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:00:48.594450 | orchestrator | 2026-01-28 01:00:48 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state STARTED 2026-01-28 01:00:48.595170 | orchestrator | 2026-01-28 01:00:48 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state STARTED 2026-01-28 01:00:48.595203 | orchestrator | 2026-01-28 01:00:48 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:01:43.441704 | orchestrator | 2026-01-28 01:01:43 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:01:43.443988 | orchestrator | 2026-01-28 01:01:43 | INFO  | Task 65069ee2-3bec-454a-8324-00c61ac49ad7 is in state SUCCESS 2026-01-28 01:01:43.445724 | orchestrator | 2026-01-28 01:01:43.445765 | orchestrator | 2026-01-28 01:01:43.445777 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:01:43.445789 | orchestrator | 2026-01-28 01:01:43.445800 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 01:01:43.445811 | orchestrator | Wednesday 28 January 2026 01:00:14 +0000 (0:00:00.273) 0:00:00.273 ***** 2026-01-28 01:01:43.445882 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:01:43.445896 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:01:43.445907 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:01:43.445917 | orchestrator | 2026-01-28 01:01:43.445928 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 01:01:43.446059 | orchestrator | Wednesday 28 January 2026 01:00:14 +0000 (0:00:00.298) 0:00:00.572 ***** 2026-01-28 01:01:43.446076 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-01-28 01:01:43.446087 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-01-28 01:01:43.446097 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-01-28 01:01:43.446108 | orchestrator | 2026-01-28 01:01:43.446119 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-01-28 01:01:43.446129 | orchestrator | 2026-01-28 01:01:43.446140 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-28 01:01:43.446150 | orchestrator | Wednesday 28 January 2026 01:00:14 +0000 (0:00:00.435) 0:00:01.007 ***** 2026-01-28 01:01:43.446166 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:01:43.446185 | orchestrator | 2026-01-28 01:01:43.446202 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-01-28 01:01:43.446220 | orchestrator | Wednesday 28 January 2026 01:00:15 +0000 (0:00:00.560) 0:00:01.568 ***** 2026-01-28 01:01:43.446262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}}}}) 2026-01-28 01:01:43.446312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-28 01:01:43.446354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-28 01:01:43.446369 | orchestrator | 2026-01-28 01:01:43.446383 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-01-28 01:01:43.446402 | orchestrator | Wednesday 28 January 2026 01:00:16 +0000 (0:00:01.143) 0:00:02.712 ***** 2026-01-28 01:01:43.446416 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:01:43.446430 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:01:43.446450 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:01:43.446469 | orchestrator | 2026-01-28 01:01:43.446487 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-28 01:01:43.446506 | orchestrator | Wednesday 28 January 2026 01:00:17 +0000 (0:00:00.460) 0:00:03.173 ***** 2026-01-28 01:01:43.446533 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-28 01:01:43.446553 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-28 01:01:43.446572 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-01-28 01:01:43.446592 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-01-28 01:01:43.446611 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-01-28 01:01:43.446631 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-01-28 01:01:43.446650 | orchestrator | 
skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-01-28 01:01:43.446667 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-01-28 01:01:43.446680 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-28 01:01:43.446692 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-28 01:01:43.446702 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-01-28 01:01:43.446713 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-01-28 01:01:43.446724 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-01-28 01:01:43.446735 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-01-28 01:01:43.446745 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-01-28 01:01:43.446756 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-01-28 01:01:43.446766 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-01-28 01:01:43.446777 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-01-28 01:01:43.446787 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-01-28 01:01:43.446798 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-01-28 01:01:43.446808 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-01-28 01:01:43.446819 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-01-28 01:01:43.446830 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-01-28 
01:01:43.446869 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-01-28 01:01:43.446893 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-01-28 01:01:43.446905 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-01-28 01:01:43.446916 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-01-28 01:01:43.446926 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-01-28 01:01:43.446947 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-01-28 01:01:43.446969 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-01-28 01:01:43.446994 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-01-28 01:01:43.447006 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-01-28 01:01:43.447017 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-01-28 01:01:43.447028 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-01-28 01:01:43.447039 | orchestrator | 2026-01-28 01:01:43.447049 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-28 01:01:43.447060 | orchestrator | Wednesday 28 January 2026 01:00:17 +0000 (0:00:00.761) 0:00:03.934 ***** 2026-01-28 01:01:43.447071 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:01:43.447081 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:01:43.447092 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:01:43.447102 | orchestrator | 2026-01-28 01:01:43.447113 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-28 01:01:43.447124 | orchestrator | Wednesday 28 January 2026 01:00:18 +0000 (0:00:00.304) 0:00:04.238 ***** 2026-01-28 01:01:43.447140 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.447152 | orchestrator | 2026-01-28 01:01:43.447163 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-28 01:01:43.447174 | orchestrator | Wednesday 28 January 2026 01:00:18 +0000 (0:00:00.125) 0:00:04.364 ***** 2026-01-28 01:01:43.447220 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.447233 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:01:43.447244 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:01:43.447255 | orchestrator | 2026-01-28 01:01:43.447265 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-28 01:01:43.447276 | orchestrator | Wednesday 28 January 2026 01:00:18 +0000 (0:00:00.449) 0:00:04.814 ***** 2026-01-28 01:01:43.447287 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:01:43.447297 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:01:43.447308 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:01:43.447319 | orchestrator | 2026-01-28 01:01:43.447329 | orchestrator | 
TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-28 01:01:43.447340 | orchestrator | Wednesday 28 January 2026 01:00:18 +0000 (0:00:00.293) 0:00:05.108 ***** 2026-01-28 01:01:43.447351 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.447361 | orchestrator | 2026-01-28 01:01:43.447372 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-28 01:01:43.447383 | orchestrator | Wednesday 28 January 2026 01:00:19 +0000 (0:00:00.129) 0:00:05.238 ***** 2026-01-28 01:01:43.447394 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.447404 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:01:43.447415 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:01:43.447426 | orchestrator | 2026-01-28 01:01:43.447436 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-28 01:01:43.447447 | orchestrator | Wednesday 28 January 2026 01:00:19 +0000 (0:00:00.285) 0:00:05.523 ***** 2026-01-28 01:01:43.447458 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:01:43.447468 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:01:43.447479 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:01:43.447490 | orchestrator | 2026-01-28 01:01:43.447501 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-28 01:01:43.447518 | orchestrator | Wednesday 28 January 2026 01:00:19 +0000 (0:00:00.313) 0:00:05.836 ***** 2026-01-28 01:01:43.447529 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.447540 | orchestrator | 2026-01-28 01:01:43.447550 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-28 01:01:43.447561 | orchestrator | Wednesday 28 January 2026 01:00:20 +0000 (0:00:00.329) 0:00:06.166 ***** 2026-01-28 01:01:43.447572 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.447582 | 
orchestrator | skipping: [testbed-node-1] 2026-01-28 01:01:43.447593 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:01:43.447604 | orchestrator | 2026-01-28 01:01:43.447614 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-28 01:01:43.447625 | orchestrator | Wednesday 28 January 2026 01:00:20 +0000 (0:00:00.326) 0:00:06.492 ***** 2026-01-28 01:01:43.447636 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:01:43.447647 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:01:43.447657 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:01:43.447668 | orchestrator | 2026-01-28 01:01:43.447679 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-28 01:01:43.447690 | orchestrator | Wednesday 28 January 2026 01:00:20 +0000 (0:00:00.370) 0:00:06.863 ***** 2026-01-28 01:01:43.447700 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.447711 | orchestrator | 2026-01-28 01:01:43.447722 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-28 01:01:43.447732 | orchestrator | Wednesday 28 January 2026 01:00:20 +0000 (0:00:00.126) 0:00:06.990 ***** 2026-01-28 01:01:43.447743 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.447754 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:01:43.447764 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:01:43.447775 | orchestrator | 2026-01-28 01:01:43.447786 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-28 01:01:43.447797 | orchestrator | Wednesday 28 January 2026 01:00:21 +0000 (0:00:00.320) 0:00:07.310 ***** 2026-01-28 01:01:43.447807 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:01:43.447818 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:01:43.447829 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:01:43.447877 | orchestrator | 2026-01-28 
01:01:43.447903 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-28 01:01:43.447948 | orchestrator | Wednesday 28 January 2026 01:00:21 +0000 (0:00:00.469) 0:00:07.780 ***** 2026-01-28 01:01:43.447981 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.447999 | orchestrator | 2026-01-28 01:01:43.448017 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-28 01:01:43.448045 | orchestrator | Wednesday 28 January 2026 01:00:21 +0000 (0:00:00.125) 0:00:07.905 ***** 2026-01-28 01:01:43.448064 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.448082 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:01:43.448101 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:01:43.448112 | orchestrator | 2026-01-28 01:01:43.448122 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-28 01:01:43.448133 | orchestrator | Wednesday 28 January 2026 01:00:22 +0000 (0:00:00.296) 0:00:08.202 ***** 2026-01-28 01:01:43.448144 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:01:43.448154 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:01:43.448165 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:01:43.448175 | orchestrator | 2026-01-28 01:01:43.448186 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-28 01:01:43.448197 | orchestrator | Wednesday 28 January 2026 01:00:22 +0000 (0:00:00.318) 0:00:08.520 ***** 2026-01-28 01:01:43.448207 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.448258 | orchestrator | 2026-01-28 01:01:43.448269 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-28 01:01:43.448318 | orchestrator | Wednesday 28 January 2026 01:00:22 +0000 (0:00:00.126) 0:00:08.647 ***** 2026-01-28 01:01:43.448340 | orchestrator | skipping: [testbed-node-0] 
2026-01-28 01:01:43.448351 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:01:43.448362 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:01:43.448372 | orchestrator | 2026-01-28 01:01:43.448383 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-28 01:01:43.448403 | orchestrator | Wednesday 28 January 2026 01:00:22 +0000 (0:00:00.280) 0:00:08.928 ***** 2026-01-28 01:01:43.448414 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:01:43.448425 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:01:43.448436 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:01:43.448446 | orchestrator | 2026-01-28 01:01:43.448457 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-01-28 01:01:43.448468 | orchestrator | Wednesday 28 January 2026 01:00:23 +0000 (0:00:00.580) 0:00:09.509 ***** 2026-01-28 01:01:43.448588 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.448599 | orchestrator | 2026-01-28 01:01:43.448610 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-01-28 01:01:43.448621 | orchestrator | Wednesday 28 January 2026 01:00:23 +0000 (0:00:00.133) 0:00:09.642 ***** 2026-01-28 01:01:43.448632 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.448642 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:01:43.448653 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:01:43.448664 | orchestrator | 2026-01-28 01:01:43.448674 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-01-28 01:01:43.448685 | orchestrator | Wednesday 28 January 2026 01:00:23 +0000 (0:00:00.313) 0:00:09.956 ***** 2026-01-28 01:01:43.448696 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:01:43.448707 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:01:43.448731 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:01:43.448742 | 
orchestrator |
2026-01-28 01:01:43.448753 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-28 01:01:43.448763 | orchestrator | Wednesday 28 January 2026 01:00:24 +0000 (0:00:00.321) 0:00:10.277 *****
2026-01-28 01:01:43.448774 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:01:43.448785 | orchestrator |
2026-01-28 01:01:43.448795 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-28 01:01:43.448806 | orchestrator | Wednesday 28 January 2026 01:00:24 +0000 (0:00:00.128) 0:00:10.406 *****
2026-01-28 01:01:43.448817 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:01:43.448827 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:01:43.448902 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:01:43.448916 | orchestrator |
2026-01-28 01:01:43.448927 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-28 01:01:43.448938 | orchestrator | Wednesday 28 January 2026 01:00:24 +0000 (0:00:00.309) 0:00:10.715 *****
2026-01-28 01:01:43.448949 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:01:43.448959 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:01:43.448970 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:01:43.448981 | orchestrator |
2026-01-28 01:01:43.448991 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-28 01:01:43.449002 | orchestrator | Wednesday 28 January 2026 01:00:25 +0000 (0:00:00.565) 0:00:11.280 *****
2026-01-28 01:01:43.449013 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:01:43.449023 | orchestrator |
2026-01-28 01:01:43.449034 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-28 01:01:43.449045 | orchestrator | Wednesday 28 January 2026 01:00:25 +0000 (0:00:00.130) 0:00:11.411 *****
2026-01-28 01:01:43.449055 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:01:43.449066 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:01:43.449077 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:01:43.449087 | orchestrator |
2026-01-28 01:01:43.449098 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-01-28 01:01:43.449109 | orchestrator | Wednesday 28 January 2026 01:00:25 +0000 (0:00:00.283) 0:00:11.695 *****
2026-01-28 01:01:43.449120 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:01:43.449139 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:01:43.449150 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:01:43.449162 | orchestrator |
2026-01-28 01:01:43.449183 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-01-28 01:01:43.449203 | orchestrator | Wednesday 28 January 2026 01:00:25 +0000 (0:00:00.300) 0:00:11.996 *****
2026-01-28 01:01:43.449222 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:01:43.449242 | orchestrator |
2026-01-28 01:01:43.449261 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-01-28 01:01:43.449281 | orchestrator | Wednesday 28 January 2026 01:00:26 +0000 (0:00:00.140) 0:00:12.137 *****
2026-01-28 01:01:43.449302 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:01:43.449356 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:01:43.449368 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:01:43.449378 | orchestrator |
2026-01-28 01:01:43.449389 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-01-28 01:01:43.449400 | orchestrator | Wednesday 28 January 2026 01:00:26 +0000 (0:00:00.525) 0:00:12.662 *****
2026-01-28 01:01:43.449410 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:01:43.449421 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:01:43.449437 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:01:43.449459 | orchestrator |
2026-01-28 01:01:43.449469 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-01-28 01:01:43.449479 | orchestrator | Wednesday 28 January 2026 01:00:28 +0000 (0:00:01.610) 0:00:14.273 *****
2026-01-28 01:01:43.449489 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-28 01:01:43.449508 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-28 01:01:43.449518 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-01-28 01:01:43.449527 | orchestrator |
2026-01-28 01:01:43.449537 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-01-28 01:01:43.449565 | orchestrator | Wednesday 28 January 2026 01:00:30 +0000 (0:00:02.120) 0:00:16.394 *****
2026-01-28 01:01:43.449576 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-28 01:01:43.449690 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-28 01:01:43.449702 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-01-28 01:01:43.449712 | orchestrator |
2026-01-28 01:01:43.449730 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-01-28 01:01:43.449740 | orchestrator | Wednesday 28 January 2026 01:00:32 +0000 (0:00:02.369) 0:00:18.763 *****
2026-01-28 01:01:43.449749 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-28 01:01:43.449759 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-28 01:01:43.449769 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-01-28 01:01:43.449786 | orchestrator |
2026-01-28 01:01:43.449803 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-01-28 01:01:43.449820 | orchestrator | Wednesday 28 January 2026 01:00:34 +0000 (0:00:02.010) 0:00:20.774 *****
2026-01-28 01:01:43.449856 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:01:43.449872 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:01:43.449889 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:01:43.449906 | orchestrator |
2026-01-28 01:01:43.449918 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-01-28 01:01:43.449928 | orchestrator | Wednesday 28 January 2026 01:00:34 +0000 (0:00:00.314) 0:00:21.088 *****
2026-01-28 01:01:43.449939 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:01:43.449956 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:01:43.449991 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:01:43.450008 | orchestrator |
2026-01-28 01:01:43.450140 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-28 01:01:43.450177 | orchestrator | Wednesday 28 January 2026 01:00:35 +0000 (0:00:00.278) 0:00:21.367 *****
2026-01-28 01:01:43.450224 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:01:43.450283 | orchestrator |
2026-01-28 01:01:43.450301 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-01-28 01:01:43.450319 | orchestrator | Wednesday 28 January 2026 01:00:36 +0000 (0:00:00.790) 0:00:22.158 *****
2026-01-28 01:01:43.450348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled':
True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}}) 2026-01-28 01:01:43.450386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-28 01:01:43.450446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-28 01:01:43.450459 | orchestrator | 2026-01-28 01:01:43.450469 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-01-28 01:01:43.450478 | orchestrator | Wednesday 28 January 2026 01:00:37 +0000 (0:00:01.587) 0:00:23.746 ***** 2026-01-28 01:01:43.450497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-28 01:01:43.450516 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.450537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-28 01:01:43.450548 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:01:43.450558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-28 01:01:43.450575 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:01:43.450585 | orchestrator | 2026-01-28 01:01:43.450594 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-01-28 01:01:43.450604 | orchestrator | Wednesday 28 January 2026 01:00:38 +0000 (0:00:00.635) 0:00:24.381 ***** 2026-01-28 01:01:43.450634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-28 01:01:43.450651 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:01:43.450662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-28 01:01:43.450672 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:01:43.450693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-01-28 01:01:43.450710 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:01:43.450720 | orchestrator | 2026-01-28 01:01:43.450729 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-01-28 01:01:43.450739 | orchestrator | Wednesday 28 January 2026 01:00:39 +0000 (0:00:00.808) 0:00:25.189 ***** 2026-01-28 01:01:43.450755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-28 01:01:43.450773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-28 01:01:43.450795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-01-28 01:01:43.450806 | orchestrator | 2026-01-28 01:01:43.450815 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-01-28 01:01:43.450825 | orchestrator | Wednesday 28 January 2026 01:00:40 +0000 (0:00:01.775) 0:00:26.964 ***** 2026-01-28 01:01:43.450869 | 
orchestrator | skipping: [testbed-node-0]
2026-01-28 01:01:43.450880 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:01:43.450895 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:01:43.450905 | orchestrator |
2026-01-28 01:01:43.450914 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-01-28 01:01:43.450940 | orchestrator | Wednesday 28 January 2026 01:00:41 +0000 (0:00:00.337) 0:00:27.302 *****
2026-01-28 01:01:43.450951 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:01:43.450960 | orchestrator |
2026-01-28 01:01:43.450970 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-01-28 01:01:43.450988 | orchestrator | Wednesday 28 January 2026 01:00:41 +0000 (0:00:00.523) 0:00:27.825 *****
2026-01-28 01:01:43.450998 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:01:43.451007 | orchestrator |
2026-01-28 01:01:43.451017 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-01-28 01:01:43.451027 | orchestrator | Wednesday 28 January 2026 01:00:44 +0000 (0:00:02.531) 0:00:30.357 *****
2026-01-28 01:01:43.451036 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:01:43.451046 | orchestrator |
2026-01-28 01:01:43.451055 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-01-28 01:01:43.451065 | orchestrator | Wednesday 28 January 2026 01:00:46 +0000 (0:00:02.708) 0:00:33.066 *****
2026-01-28 01:01:43.451074 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:01:43.451084 | orchestrator |
2026-01-28 01:01:43.451093 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-28 01:01:43.451103 | orchestrator | Wednesday 28 January 2026 01:01:02 +0000 (0:00:15.250) 0:00:48.317 *****
2026-01-28 01:01:43.451112 | orchestrator |
2026-01-28 01:01:43.451122 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-28 01:01:43.451131 | orchestrator | Wednesday 28 January 2026 01:01:02 +0000 (0:00:00.065) 0:00:48.382 *****
2026-01-28 01:01:43.451141 | orchestrator |
2026-01-28 01:01:43.451150 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-01-28 01:01:43.451160 | orchestrator | Wednesday 28 January 2026 01:01:02 +0000 (0:00:00.066) 0:00:48.449 *****
2026-01-28 01:01:43.451169 | orchestrator |
2026-01-28 01:01:43.451179 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-01-28 01:01:43.451189 | orchestrator | Wednesday 28 January 2026 01:01:02 +0000 (0:00:00.087) 0:00:48.536 *****
2026-01-28 01:01:43.451198 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:01:43.451208 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:01:43.451217 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:01:43.451227 | orchestrator |
2026-01-28 01:01:43.451236 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 01:01:43.451246 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-01-28 01:01:43.451256 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-28 01:01:43.451266 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-01-28 01:01:43.451276 | orchestrator |
2026-01-28 01:01:43.451286 | orchestrator |
2026-01-28 01:01:43.451295 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 01:01:43.451304 | orchestrator | Wednesday 28 January 2026 01:01:40 +0000 (0:00:38.541) 0:01:27.078 *****
2026-01-28 01:01:43.451314 | orchestrator | ===============================================================================
2026-01-28 01:01:43.451324 | orchestrator | horizon : Restart horizon container ------------------------------------ 38.54s
2026-01-28 01:01:43.451333 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.25s
2026-01-28 01:01:43.451342 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.71s
2026-01-28 01:01:43.451352 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.53s
2026-01-28 01:01:43.451371 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.37s
2026-01-28 01:01:43.451380 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.12s
2026-01-28 01:01:43.451390 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.01s
2026-01-28 01:01:43.451399 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.78s
2026-01-28 01:01:43.451409 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.61s
2026-01-28 01:01:43.451419 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.59s
2026-01-28 01:01:43.451428 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.14s
2026-01-28 01:01:43.451438 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.81s
2026-01-28 01:01:43.451447 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s
2026-01-28 01:01:43.451457 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s
2026-01-28 01:01:43.451466 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.64s
2026-01-28 01:01:43.451476 | orchestrator | horizon :
Update policy file name --------------------------------------- 0.58s 2026-01-28 01:01:43.451485 | orchestrator | horizon : Update policy file name --------------------------------------- 0.57s 2026-01-28 01:01:43.451495 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2026-01-28 01:01:43.451504 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2026-01-28 01:01:43.451514 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-01-28 01:01:43.451523 | orchestrator | 2026-01-28 01:01:43.451532 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-28 01:01:43.451542 | orchestrator | 2.16.14 2026-01-28 01:01:43.451552 | orchestrator | 2026-01-28 01:01:43.451567 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-01-28 01:01:43.451577 | orchestrator | 2026-01-28 01:01:43.451586 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-01-28 01:01:43.451596 | orchestrator | Wednesday 28 January 2026 00:59:37 +0000 (0:00:00.575) 0:00:00.575 ***** 2026-01-28 01:01:43.451605 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 01:01:43.451615 | orchestrator | 2026-01-28 01:01:43.451625 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-01-28 01:01:43.451634 | orchestrator | Wednesday 28 January 2026 00:59:37 +0000 (0:00:00.621) 0:00:01.196 ***** 2026-01-28 01:01:43.451644 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:01:43.451654 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:01:43.451663 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:01:43.451673 | orchestrator | 2026-01-28 01:01:43.451682 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] 
***************************************** 2026-01-28 01:01:43.451692 | orchestrator | Wednesday 28 January 2026 00:59:38 +0000 (0:00:00.655) 0:00:01.852 ***** 2026-01-28 01:01:43.451702 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:01:43.451711 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:01:43.451721 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:01:43.451730 | orchestrator | 2026-01-28 01:01:43.451740 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-01-28 01:01:43.451749 | orchestrator | Wednesday 28 January 2026 00:59:38 +0000 (0:00:00.327) 0:00:02.180 ***** 2026-01-28 01:01:43.451759 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:01:43.451793 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:01:43.451804 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:01:43.451813 | orchestrator | 2026-01-28 01:01:43.451823 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-01-28 01:01:43.451832 | orchestrator | Wednesday 28 January 2026 00:59:39 +0000 (0:00:00.842) 0:00:03.022 ***** 2026-01-28 01:01:43.451867 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:01:43.451877 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:01:43.451887 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:01:43.451896 | orchestrator | 2026-01-28 01:01:43.451905 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-01-28 01:01:43.451915 | orchestrator | Wednesday 28 January 2026 00:59:40 +0000 (0:00:00.345) 0:00:03.368 ***** 2026-01-28 01:01:43.451924 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:01:43.451934 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:01:43.451943 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:01:43.451952 | orchestrator | 2026-01-28 01:01:43.451962 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-01-28 01:01:43.451971 | 
orchestrator | Wednesday 28 January 2026 00:59:40 +0000 (0:00:00.330) 0:00:03.698 ***** 2026-01-28 01:01:43.451980 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:01:43.451990 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:01:43.451999 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:01:43.452008 | orchestrator | 2026-01-28 01:01:43.452018 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-01-28 01:01:43.452027 | orchestrator | Wednesday 28 January 2026 00:59:40 +0000 (0:00:00.338) 0:00:04.036 ***** 2026-01-28 01:01:43.452037 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.452046 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.452056 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.452065 | orchestrator | 2026-01-28 01:01:43.452074 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-01-28 01:01:43.452084 | orchestrator | Wednesday 28 January 2026 00:59:41 +0000 (0:00:00.479) 0:00:04.516 ***** 2026-01-28 01:01:43.452093 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:01:43.452103 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:01:43.452112 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:01:43.452121 | orchestrator | 2026-01-28 01:01:43.452131 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-01-28 01:01:43.452140 | orchestrator | Wednesday 28 January 2026 00:59:41 +0000 (0:00:00.297) 0:00:04.813 ***** 2026-01-28 01:01:43.452150 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-28 01:01:43.452159 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-28 01:01:43.452169 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-28 01:01:43.452178 | orchestrator | 2026-01-28 01:01:43.452187 | 
orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-01-28 01:01:43.452197 | orchestrator | Wednesday 28 January 2026 00:59:42 +0000 (0:00:00.639) 0:00:05.452 ***** 2026-01-28 01:01:43.452206 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:01:43.452216 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:01:43.452225 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:01:43.452235 | orchestrator | 2026-01-28 01:01:43.452248 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-01-28 01:01:43.452258 | orchestrator | Wednesday 28 January 2026 00:59:42 +0000 (0:00:00.442) 0:00:05.895 ***** 2026-01-28 01:01:43.452267 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-28 01:01:43.452277 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-28 01:01:43.452286 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-28 01:01:43.452295 | orchestrator | 2026-01-28 01:01:43.452305 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-01-28 01:01:43.452314 | orchestrator | Wednesday 28 January 2026 00:59:44 +0000 (0:00:01.970) 0:00:07.865 ***** 2026-01-28 01:01:43.452324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-01-28 01:01:43.452334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-01-28 01:01:43.452343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-01-28 01:01:43.452359 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.452369 | orchestrator | 2026-01-28 01:01:43.452378 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-01-28 01:01:43.452388 | orchestrator | Wednesday 28 January 2026 00:59:45 +0000 (0:00:00.686) 0:00:08.552 ***** 
2026-01-28 01:01:43.452403 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.452415 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.452425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.452435 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.452445 | orchestrator | 2026-01-28 01:01:43.452455 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-01-28 01:01:43.452464 | orchestrator | Wednesday 28 January 2026 00:59:46 +0000 (0:00:00.866) 0:00:09.418 ***** 2026-01-28 01:01:43.452475 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.452487 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.452497 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.452507 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.452516 | orchestrator | 2026-01-28 01:01:43.452526 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-01-28 01:01:43.452535 | orchestrator | Wednesday 28 January 2026 00:59:46 +0000 (0:00:00.348) 0:00:09.766 ***** 2026-01-28 01:01:43.452549 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '96c65490294f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-01-28 00:59:43.223888', 'end': '2026-01-28 00:59:43.252200', 'delta': '0:00:00.028312', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['96c65490294f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-01-28 01:01:43.452560 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd34375f0b62d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-01-28 00:59:43.906020', 'end': '2026-01-28 
00:59:43.931596', 'delta': '0:00:00.025576', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d34375f0b62d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-01-28 01:01:43.452582 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9ac40896144b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-01-28 00:59:44.378648', 'end': '2026-01-28 00:59:44.410053', 'delta': '0:00:00.031405', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9ac40896144b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-01-28 01:01:43.452593 | orchestrator | 2026-01-28 01:01:43.452603 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-01-28 01:01:43.452612 | orchestrator | Wednesday 28 January 2026 00:59:46 +0000 (0:00:00.196) 0:00:09.962 ***** 2026-01-28 01:01:43.452621 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:01:43.452631 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:01:43.452640 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:01:43.452650 | orchestrator | 2026-01-28 01:01:43.452659 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-01-28 01:01:43.452669 | orchestrator 
| Wednesday 28 January 2026 00:59:47 +0000 (0:00:00.441) 0:00:10.404 ***** 2026-01-28 01:01:43.452678 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-01-28 01:01:43.452688 | orchestrator | 2026-01-28 01:01:43.452697 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-01-28 01:01:43.452707 | orchestrator | Wednesday 28 January 2026 00:59:48 +0000 (0:00:01.773) 0:00:12.177 ***** 2026-01-28 01:01:43.452716 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.452726 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.452735 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.452745 | orchestrator | 2026-01-28 01:01:43.452754 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-01-28 01:01:43.452764 | orchestrator | Wednesday 28 January 2026 00:59:49 +0000 (0:00:00.324) 0:00:12.502 ***** 2026-01-28 01:01:43.452773 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.452783 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.452792 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.452802 | orchestrator | 2026-01-28 01:01:43.452811 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-28 01:01:43.452820 | orchestrator | Wednesday 28 January 2026 00:59:49 +0000 (0:00:00.411) 0:00:12.914 ***** 2026-01-28 01:01:43.452830 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.452869 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.452881 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.452891 | orchestrator | 2026-01-28 01:01:43.452900 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-01-28 01:01:43.452910 | orchestrator | Wednesday 28 January 2026 00:59:50 +0000 (0:00:00.495) 0:00:13.409 ***** 2026-01-28 01:01:43.452919 | orchestrator | 
ok: [testbed-node-3] 2026-01-28 01:01:43.452929 | orchestrator | 2026-01-28 01:01:43.452938 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-01-28 01:01:43.452948 | orchestrator | Wednesday 28 January 2026 00:59:50 +0000 (0:00:00.140) 0:00:13.550 ***** 2026-01-28 01:01:43.452963 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.452972 | orchestrator | 2026-01-28 01:01:43.452982 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-01-28 01:01:43.452991 | orchestrator | Wednesday 28 January 2026 00:59:50 +0000 (0:00:00.237) 0:00:13.787 ***** 2026-01-28 01:01:43.453001 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.453010 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.453020 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.453029 | orchestrator | 2026-01-28 01:01:43.453038 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-01-28 01:01:43.453048 | orchestrator | Wednesday 28 January 2026 00:59:50 +0000 (0:00:00.276) 0:00:14.064 ***** 2026-01-28 01:01:43.453057 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.453067 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.453076 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.453086 | orchestrator | 2026-01-28 01:01:43.453095 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-01-28 01:01:43.453105 | orchestrator | Wednesday 28 January 2026 00:59:51 +0000 (0:00:00.309) 0:00:14.373 ***** 2026-01-28 01:01:43.453114 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.453124 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.453133 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.453142 | orchestrator | 2026-01-28 01:01:43.453156 | orchestrator | TASK [ceph-facts : Resolve 
dedicated_device link(s)] *************************** 2026-01-28 01:01:43.453165 | orchestrator | Wednesday 28 January 2026 00:59:51 +0000 (0:00:00.495) 0:00:14.869 ***** 2026-01-28 01:01:43.453175 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.453184 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.453194 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.453203 | orchestrator | 2026-01-28 01:01:43.453213 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-01-28 01:01:43.453222 | orchestrator | Wednesday 28 January 2026 00:59:51 +0000 (0:00:00.316) 0:00:15.185 ***** 2026-01-28 01:01:43.453232 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.453241 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.453250 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.453260 | orchestrator | 2026-01-28 01:01:43.453269 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-01-28 01:01:43.453279 | orchestrator | Wednesday 28 January 2026 00:59:52 +0000 (0:00:00.307) 0:00:15.492 ***** 2026-01-28 01:01:43.453288 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.453298 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.453307 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.453317 | orchestrator | 2026-01-28 01:01:43.453326 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-01-28 01:01:43.453336 | orchestrator | Wednesday 28 January 2026 00:59:52 +0000 (0:00:00.307) 0:00:15.800 ***** 2026-01-28 01:01:43.453351 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.453360 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.453370 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.453379 | orchestrator | 2026-01-28 01:01:43.453389 | orchestrator | TASK [ceph-facts : Collect 
existed devices] ************************************ 2026-01-28 01:01:43.453399 | orchestrator | Wednesday 28 January 2026 00:59:52 +0000 (0:00:00.483) 0:00:16.284 ***** 2026-01-28 01:01:43.453410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe-osd--block--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe', 'dm-uuid-LVM-BuuylK42M4sAxBlhnDIIurvZHyCeVCsgXTItj8X84JRWTcMCSsGIbJh2LmIJreU4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cf0ea652--88a6--5aa8--929a--ed9131fd0cef-osd--block--cf0ea652--88a6--5aa8--929a--ed9131fd0cef', 'dm-uuid-LVM-EdsurwuGKZufF9XVsDJukuhsKhfu1ggWUyScsX0MF9OOWySHTJp1xCZzqLTe1NJD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453448 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part15', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453551 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe-osd--block--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4bPgF7-Vfr9-RZz2-tbWr-gSfa-6KPe-2MWuwN', 'scsi-0QEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250', 'scsi-SQEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453568 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e01643e5--7b60--5b49--bc8a--cfec0728964e-osd--block--e01643e5--7b60--5b49--bc8a--cfec0728964e', 'dm-uuid-LVM-NbzBTxqeS0v8OLHU0diczabMjdpA9hEuwC1CGwcm3OqXNzdGc6gIHp2bseol3nfc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cf0ea652--88a6--5aa8--929a--ed9131fd0cef-osd--block--cf0ea652--88a6--5aa8--929a--ed9131fd0cef'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YK1El4-y6r6-KkY1-0cH0-prT4-ZF4x-ZXUFUA', 'scsi-0QEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d', 'scsi-SQEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae2f77e7--beca--5176--aee2--b01d14f9def4-osd--block--ae2f77e7--beca--5176--aee2--b01d14f9def4', 'dm-uuid-LVM-kbDRFqdPNw52PykaapAiUHvnSFqt9fS0lVLSJThpjo8x8a1YfaF2PG22wa3khepJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59', 'scsi-SQEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-01-28 01:01:43.453651 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453684 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453744 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.453759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--e01643e5--7b60--5b49--bc8a--cfec0728964e-osd--block--e01643e5--7b60--5b49--bc8a--cfec0728964e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cU7pY2-cQuF-A1YO-e6Ud-t9dX-bbsF-IBAbek', 'scsi-0QEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772', 'scsi-SQEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ae2f77e7--beca--5176--aee2--b01d14f9def4-osd--block--ae2f77e7--beca--5176--aee2--b01d14f9def4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yhnmRE-3dTL-6agf-Zw3c-v8PG-xEkE-5lXaUy', 'scsi-0QEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f', 'scsi-SQEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d', 'scsi-SQEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453800 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.453813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e-osd--block--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e', 'dm-uuid-LVM-0qDjjo5Cy36D4QNVUdVbEU60mRT1YMFZpWhKgewiIWD9xe7cCOUyxT7KnqUR0TaA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6-osd--block--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6', 'dm-uuid-LVM-wMIkpdbksNS8xVnUKnZmuEyVN1ecWDtjKTwyJcc49GTQefJ93Aa8WyZajAMuD9iN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453904 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-01-28 01:01:43.453956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part1', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part14', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part15', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part16', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e-osd--block--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-t0OFdX-49rb-dGJy-sNiA-CXRc-i2Mk-NfstfW', 'scsi-0QEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d', 'scsi-SQEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6-osd--block--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hscqIg-8ApU-btZD-n3Qv-YKoO-fBEH-5Udamz', 'scsi-0QEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37', 'scsi-SQEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.453997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9', 'scsi-SQEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.454012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-01-28 01:01:43.454071 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:01:43.454081 | orchestrator | 2026-01-28 01:01:43.454091 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-01-28 01:01:43.454101 | orchestrator | Wednesday 28 January 2026 00:59:53 +0000 (0:00:00.588) 0:00:16.872 ***** 2026-01-28 01:01:43.454111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe-osd--block--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe', 'dm-uuid-LVM-BuuylK42M4sAxBlhnDIIurvZHyCeVCsgXTItj8X84JRWTcMCSsGIbJh2LmIJreU4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454122 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cf0ea652--88a6--5aa8--929a--ed9131fd0cef-osd--block--cf0ea652--88a6--5aa8--929a--ed9131fd0cef', 'dm-uuid-LVM-EdsurwuGKZufF9XVsDJukuhsKhfu1ggWUyScsX0MF9OOWySHTJp1xCZzqLTe1NJD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454132 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454156 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454177 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454188 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454197 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454231 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e01643e5--7b60--5b49--bc8a--cfec0728964e-osd--block--e01643e5--7b60--5b49--bc8a--cfec0728964e', 'dm-uuid-LVM-NbzBTxqeS0v8OLHU0diczabMjdpA9hEuwC1CGwcm3OqXNzdGc6gIHp2bseol3nfc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454264 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part1', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part14', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part15', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part16', 'scsi-SQEMU_QEMU_HARDDISK_d9f799c7-1a34-4b7c-88c1-e9cf002fdca2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454277 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ae2f77e7--beca--5176--aee2--b01d14f9def4-osd--block--ae2f77e7--beca--5176--aee2--b01d14f9def4', 'dm-uuid-LVM-kbDRFqdPNw52PykaapAiUHvnSFqt9fS0lVLSJThpjo8x8a1YfaF2PG22wa3khepJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe-osd--block--12f0ff1a--fab7--5a0a--bd83--09da1ae004fe'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4bPgF7-Vfr9-RZz2-tbWr-gSfa-6KPe-2MWuwN', 'scsi-0QEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250', 'scsi-SQEMU_QEMU_HARDDISK_1ddb64ab-bdcb-47d2-8e6e-2a5ea470c250'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--cf0ea652--88a6--5aa8--929a--ed9131fd0cef-osd--block--cf0ea652--88a6--5aa8--929a--ed9131fd0cef'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YK1El4-y6r6-KkY1-0cH0-prT4-ZF4x-ZXUFUA', 'scsi-0QEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d', 'scsi-SQEMU_QEMU_HARDDISK_b987a0a7-5a55-41a6-ab39-84821076e11d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454337 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59', 'scsi-SQEMU_QEMU_HARDDISK_ac1a11ac-4fa1-43c2-9cb0-18dea5100f59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454350 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-11-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454382 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454392 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.454406 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454417 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454427 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454460 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e-osd--block--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e', 'dm-uuid-LVM-0qDjjo5Cy36D4QNVUdVbEU60mRT1YMFZpWhKgewiIWD9xe7cCOUyxT7KnqUR0TaA'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454482 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part1', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part14', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part15', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part16', 'scsi-SQEMU_QEMU_HARDDISK_6a73c03e-ec93-4e83-874f-d58572852c6e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-01-28 01:01:43.454493 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6-osd--block--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6', 'dm-uuid-LVM-wMIkpdbksNS8xVnUKnZmuEyVN1ecWDtjKTwyJcc49GTQefJ93Aa8WyZajAMuD9iN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454507 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--e01643e5--7b60--5b49--bc8a--cfec0728964e-osd--block--e01643e5--7b60--5b49--bc8a--cfec0728964e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cU7pY2-cQuF-A1YO-e6Ud-t9dX-bbsF-IBAbek', 'scsi-0QEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772', 'scsi-SQEMU_QEMU_HARDDISK_73ccfbcc-79e9-4762-9da0-bda867b64772'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454523 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454538 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454548 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ae2f77e7--beca--5176--aee2--b01d14f9def4-osd--block--ae2f77e7--beca--5176--aee2--b01d14f9def4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yhnmRE-3dTL-6agf-Zw3c-v8PG-xEkE-5lXaUy', 'scsi-0QEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f', 'scsi-SQEMU_QEMU_HARDDISK_eefe2a4b-8f8c-4873-9530-ac9327ae5f1f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454558 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454568 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d', 'scsi-SQEMU_QEMU_HARDDISK_f01b7d80-edbf-4a4b-9318-1b6b20cc249d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454587 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454613 | orchestrator | skipping: 
[testbed-node-4] 2026-01-28 01:01:43.454624 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454634 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454644 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454654 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454681 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part1', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part14', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part15', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part16', 'scsi-SQEMU_QEMU_HARDDISK_803306ce-e622-41b7-ac52-96a9edfbbdc2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454693 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e-osd--block--60e20e1d--9b2b--5d4f--86ba--deb7f624d16e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-t0OFdX-49rb-dGJy-sNiA-CXRc-i2Mk-NfstfW', 'scsi-0QEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d', 'scsi-SQEMU_QEMU_HARDDISK_3b42e8f1-b37c-4f60-8295-3641607d148d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454703 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6-osd--block--6a7f1cd8--9d71--5746--99fd--f6abb350b2d6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hscqIg-8ApU-btZD-n3Qv-YKoO-fBEH-5Udamz', 'scsi-0QEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37', 'scsi-SQEMU_QEMU_HARDDISK_aa542874-8b0a-406e-9706-56af76962c37'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-01-28 01:01:43.454722 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9', 'scsi-SQEMU_QEMU_HARDDISK_28bc48d4-f1f3-45fc-825f-eba8771d5ae9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-28 01:01:43.454737 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-01-28-00-03-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-01-28 01:01:43.454747 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:01:43.454757 | orchestrator |
2026-01-28 01:01:43.454767 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-01-28 01:01:43.454776 | orchestrator | Wednesday 28 January 2026 00:59:54 +0000 (0:00:00.589) 0:00:17.462 *****
2026-01-28 01:01:43.454786 | orchestrator | ok: [testbed-node-3]
2026-01-28 01:01:43.454796 | orchestrator | ok: [testbed-node-4]
2026-01-28 01:01:43.454805 | orchestrator | ok: [testbed-node-5]
2026-01-28 01:01:43.454815 | orchestrator |
2026-01-28 01:01:43.454824 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-01-28 01:01:43.454834 | orchestrator | Wednesday 28 January 2026 00:59:54 +0000 (0:00:00.695) 0:00:18.157 *****
2026-01-28 01:01:43.454868 | orchestrator | ok: [testbed-node-3]
2026-01-28 01:01:43.454886 | orchestrator | ok: [testbed-node-4]
2026-01-28 01:01:43.454903 | orchestrator | ok: [testbed-node-5]
2026-01-28 01:01:43.454918 | orchestrator |
2026-01-28 01:01:43.454930 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-28 01:01:43.454940 | orchestrator | Wednesday 28 January 2026 00:59:55 +0000 (0:00:00.505) 0:00:18.662 *****
2026-01-28 01:01:43.454949 | orchestrator | ok: [testbed-node-3]
2026-01-28 01:01:43.454958 | orchestrator | ok: [testbed-node-4]
2026-01-28 01:01:43.454968 | orchestrator | ok: [testbed-node-5]
2026-01-28 01:01:43.454977 | orchestrator |
2026-01-28 01:01:43.454987 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-28 01:01:43.454997 | orchestrator | Wednesday 28 January 2026 00:59:55 +0000 (0:00:00.639) 0:00:19.301 *****
2026-01-28 01:01:43.455006 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:01:43.455016 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:01:43.455026 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:01:43.455035 | orchestrator |
2026-01-28 01:01:43.455045 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-01-28 01:01:43.455061 | orchestrator | Wednesday 28 January 2026 00:59:56 +0000 (0:00:00.313) 0:00:19.615 *****
2026-01-28 01:01:43.455070 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:01:43.455080 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:01:43.455089 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:01:43.455099 | orchestrator |
2026-01-28 01:01:43.455108 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-01-28 01:01:43.455118 | orchestrator | Wednesday 28 January 2026 00:59:56 +0000 (0:00:00.415) 0:00:20.030 *****
2026-01-28 01:01:43.455128 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:01:43.455137 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:01:43.455146 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:01:43.455156 | orchestrator |
2026-01-28 01:01:43.455165 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-01-28 01:01:43.455175 | orchestrator | Wednesday 28 January 2026 00:59:57 +0000 (0:00:00.511) 0:00:20.542 *****
2026-01-28 01:01:43.455184 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-01-28 01:01:43.455194 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-01-28 01:01:43.455204 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-01-28 01:01:43.455213 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-01-28 01:01:43.455223 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-01-28 01:01:43.455232 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-01-28 01:01:43.455241 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-01-28 01:01:43.455251 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-01-28 01:01:43.455261 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-01-28 01:01:43.455270 | orchestrator |
2026-01-28 01:01:43.455280 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-01-28 01:01:43.455289 | orchestrator | Wednesday 28 January 2026 00:59:58 +0000 (0:00:00.832) 0:00:21.374 *****
2026-01-28 01:01:43.455299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-01-28 01:01:43.455308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-01-28 01:01:43.455318 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-01-28 01:01:43.455327 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:01:43.455337 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-01-28 01:01:43.455346 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-01-28 01:01:43.455360 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-01-28 01:01:43.455370 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:01:43.455379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-01-28 01:01:43.455388 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-01-28 01:01:43.455398 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-01-28 01:01:43.455407 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:01:43.455417 | orchestrator |
2026-01-28 01:01:43.455426 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-01-28 01:01:43.455436 | orchestrator | Wednesday 28 January 2026 00:59:58 +0000 (0:00:00.386) 0:00:21.761 *****
2026-01-28 01:01:43.455446 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 01:01:43.455455 | orchestrator |
2026-01-28 01:01:43.455465 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-01-28 01:01:43.455475 | orchestrator | Wednesday 28 January 2026 00:59:59 +0000 (0:00:00.670) 0:00:22.431 *****
2026-01-28 01:01:43.455484 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:01:43.455494 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:01:43.455503 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:01:43.455513 | orchestrator |
2026-01-28 01:01:43.455522 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-01-28 01:01:43.455556 | orchestrator | Wednesday 28 January 2026 00:59:59 +0000 (0:00:00.324) 0:00:22.756 *****
2026-01-28 01:01:43.455567 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:01:43.455576 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:01:43.455586 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:01:43.455595 | orchestrator |
2026-01-28 01:01:43.455605 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-01-28 01:01:43.455615 | orchestrator | Wednesday 28 January 2026 00:59:59 +0000 (0:00:00.298) 0:00:23.054 *****
2026-01-28 01:01:43.455624 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:01:43.455634 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:01:43.455643 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:01:43.455653 | orchestrator |
2026-01-28 01:01:43.455662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-01-28 01:01:43.455672 | orchestrator | Wednesday 28 January 2026 01:00:00 +0000 (0:00:00.298) 0:00:23.352 *****
2026-01-28 01:01:43.455681 | orchestrator | ok: [testbed-node-3]
2026-01-28 01:01:43.455691 | orchestrator | ok: [testbed-node-4]
2026-01-28 01:01:43.455700 | orchestrator | ok: [testbed-node-5]
2026-01-28 01:01:43.455710 | orchestrator |
2026-01-28 01:01:43.455720 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-01-28 01:01:43.455730 | orchestrator | Wednesday 28 January 2026 01:00:00 +0000 (0:00:00.928) 0:00:24.281 *****
2026-01-28 01:01:43.455739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-28 01:01:43.455749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-28 01:01:43.455758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-28 01:01:43.455768 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:01:43.455777 | orchestrator |
2026-01-28 01:01:43.455787 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-01-28 01:01:43.455796 | orchestrator | Wednesday 28 January 2026 01:00:01 +0000 (0:00:00.399) 0:00:24.680 *****
2026-01-28 01:01:43.455806 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-28 01:01:43.455815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-28 01:01:43.455825 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-28 01:01:43.455835 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:01:43.455893 | orchestrator |
2026-01-28 01:01:43.455904 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-01-28 01:01:43.455913 | orchestrator | Wednesday 28 January 2026 01:00:01 +0000 (0:00:00.378) 0:00:25.059 *****
2026-01-28 01:01:43.455923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-01-28 01:01:43.455932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-01-28 01:01:43.455942 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-01-28 01:01:43.455951 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:01:43.455961 | orchestrator |
2026-01-28 01:01:43.455970 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-01-28 01:01:43.455980 | orchestrator | Wednesday 28 January 2026 01:00:02 +0000 (0:00:00.360) 0:00:25.420 *****
2026-01-28 01:01:43.455990 | orchestrator | ok: [testbed-node-3]
2026-01-28 01:01:43.455999 | orchestrator | ok: [testbed-node-4]
2026-01-28 01:01:43.456009 | orchestrator | ok: [testbed-node-5]
2026-01-28 01:01:43.456018 | orchestrator |
2026-01-28 01:01:43.456027 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-01-28 01:01:43.456037 | orchestrator | Wednesday 28 January 2026 01:00:02
+0000 (0:00:00.335) 0:00:25.755 ***** 2026-01-28 01:01:43.456046 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-01-28 01:01:43.456056 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-01-28 01:01:43.456065 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-01-28 01:01:43.456075 | orchestrator | 2026-01-28 01:01:43.456084 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-01-28 01:01:43.456100 | orchestrator | Wednesday 28 January 2026 01:00:02 +0000 (0:00:00.490) 0:00:26.246 ***** 2026-01-28 01:01:43.456108 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-28 01:01:43.456116 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-28 01:01:43.456123 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-28 01:01:43.456131 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-28 01:01:43.456139 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-28 01:01:43.456151 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-28 01:01:43.456159 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-28 01:01:43.456167 | orchestrator | 2026-01-28 01:01:43.456175 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-01-28 01:01:43.456183 | orchestrator | Wednesday 28 January 2026 01:00:03 +0000 (0:00:00.994) 0:00:27.241 ***** 2026-01-28 01:01:43.456190 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-01-28 01:01:43.456198 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-01-28 01:01:43.456206 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-01-28 01:01:43.456214 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-01-28 01:01:43.456222 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-01-28 01:01:43.456229 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-01-28 01:01:43.456237 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-01-28 01:01:43.456245 | orchestrator | 2026-01-28 01:01:43.456253 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-01-28 01:01:43.456266 | orchestrator | Wednesday 28 January 2026 01:00:05 +0000 (0:00:02.000) 0:00:29.241 ***** 2026-01-28 01:01:43.456274 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:01:43.456281 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:01:43.456289 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-01-28 01:01:43.456297 | orchestrator | 2026-01-28 01:01:43.456305 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-01-28 01:01:43.456313 | orchestrator | Wednesday 28 January 2026 01:00:06 +0000 (0:00:00.398) 0:00:29.639 ***** 2026-01-28 01:01:43.456321 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-28 01:01:43.456331 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2026-01-28 01:01:43.456339 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-28 01:01:43.456348 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-28 01:01:43.456356 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-01-28 01:01:43.456368 | orchestrator | 2026-01-28 01:01:43.456376 | orchestrator | TASK [generate keys] *********************************************************** 2026-01-28 01:01:43.456384 | orchestrator | Wednesday 28 January 2026 01:00:50 +0000 (0:00:44.277) 0:01:13.917 ***** 2026-01-28 01:01:43.456392 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456400 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456408 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456416 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456424 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456431 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 
01:01:43.456439 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-01-28 01:01:43.456447 | orchestrator | 2026-01-28 01:01:43.456455 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-01-28 01:01:43.456463 | orchestrator | Wednesday 28 January 2026 01:01:13 +0000 (0:00:23.366) 0:01:37.284 ***** 2026-01-28 01:01:43.456470 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456478 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456486 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456494 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456505 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456513 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456521 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-01-28 01:01:43.456529 | orchestrator | 2026-01-28 01:01:43.456536 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-01-28 01:01:43.456544 | orchestrator | Wednesday 28 January 2026 01:01:25 +0000 (0:00:12.008) 0:01:49.292 ***** 2026-01-28 01:01:43.456552 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456560 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-28 01:01:43.456568 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-28 01:01:43.456576 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456583 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2026-01-28 01:01:43.456591 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-28 01:01:43.456599 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456612 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-28 01:01:43.456620 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-28 01:01:43.456628 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456635 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-28 01:01:43.456643 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-28 01:01:43.456651 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456658 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-28 01:01:43.456671 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-28 01:01:43.456679 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-01-28 01:01:43.456687 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-01-28 01:01:43.456694 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-01-28 01:01:43.456702 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-01-28 01:01:43.456710 | orchestrator | 2026-01-28 01:01:43.456717 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:01:43.456725 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-01-28 01:01:43.456734 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-01-28 01:01:43.456742 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-28 01:01:43.456749 | orchestrator | 2026-01-28 01:01:43.456757 | orchestrator | 2026-01-28 01:01:43.456765 | orchestrator | 2026-01-28 01:01:43.456779 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:01:43.456796 | orchestrator | Wednesday 28 January 2026 01:01:42 +0000 (0:00:16.041) 0:02:05.334 ***** 2026-01-28 01:01:43.456816 | orchestrator | =============================================================================== 2026-01-28 01:01:43.456829 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.28s 2026-01-28 01:01:43.456862 | orchestrator | generate keys ---------------------------------------------------------- 23.37s 2026-01-28 01:01:43.456874 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.04s 2026-01-28 01:01:43.456887 | orchestrator | get keys from monitors ------------------------------------------------- 12.01s 2026-01-28 01:01:43.456899 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.00s 2026-01-28 01:01:43.456911 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.97s 2026-01-28 01:01:43.456924 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.77s 2026-01-28 01:01:43.456937 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.99s 2026-01-28 01:01:43.456951 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.93s 2026-01-28 01:01:43.456965 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.87s 2026-01-28 
01:01:43.456978 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.84s 2026-01-28 01:01:43.456992 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.83s 2026-01-28 01:01:43.457006 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s 2026-01-28 01:01:43.457019 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.69s 2026-01-28 01:01:43.457032 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s 2026-01-28 01:01:43.457040 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s 2026-01-28 01:01:43.457053 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2026-01-28 01:01:43.457061 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.64s 2026-01-28 01:01:43.457069 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s 2026-01-28 01:01:43.457077 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s 2026-01-28 01:01:43.457085 | orchestrator | 2026-01-28 01:01:43 | INFO  | Task 50c089a7-9f98-4ce5-a16a-85a41fea12ea is in state STARTED 2026-01-28 01:01:43.457093 | orchestrator | 2026-01-28 01:01:43 | INFO  | Task 0fbda35b-09ff-4746-929c-9c213ffe2ba4 is in state SUCCESS 2026-01-28 01:01:43.457110 | orchestrator | 2026-01-28 01:01:43 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:01:46.490565 | orchestrator | 2026-01-28 01:01:46 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:01:46.493427 | orchestrator | 2026-01-28 01:01:46 | INFO  | Task 50c089a7-9f98-4ce5-a16a-85a41fea12ea is in state STARTED 2026-01-28 01:01:46.493493 | orchestrator | 2026-01-28 01:01:46 | INFO  | Wait 1 second(s) until the next 
check 2026-01-28 01:02:19.961977 | orchestrator | 2026-01-28 01:02:19 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28
01:02:19.963543 | orchestrator | 2026-01-28 01:02:19 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state STARTED 2026-01-28 01:02:19.965188 | orchestrator | 2026-01-28 01:02:19 | INFO  | Task 50c089a7-9f98-4ce5-a16a-85a41fea12ea is in state SUCCESS 2026-01-28 01:02:19.965245 | orchestrator | 2026-01-28 01:02:19 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:02:44.303514 | orchestrator | 2026-01-28 01:02:44 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28 01:02:44.304976 | orchestrator | 2026-01-28 01:02:44 | INFO  | Task a76ec30c-0579-42c5-abcf-5166bf5084cf is in state SUCCESS 2026-01-28 01:02:44.309021 | orchestrator | 2026-01-28 01:02:44.309062 | orchestrator | 2026-01-28 01:02:44.309068 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-01-28 01:02:44.309074 | orchestrator | 2026-01-28 01:02:44.309079 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-01-28 01:02:44.309084 | orchestrator | Wednesday 28 January 2026 01:01:46 +0000 (0:00:00.145) 0:00:00.145 ***** 2026-01-28 01:02:44.309090 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-28
01:02:44.309096 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309101 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309107 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-28 01:02:44.309112 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309128 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-28 01:02:44.309133 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-28 01:02:44.309138 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-28 01:02:44.309143 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-28 01:02:44.309147 | orchestrator | 2026-01-28 01:02:44.309152 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-01-28 01:02:44.309156 | orchestrator | Wednesday 28 January 2026 01:01:51 +0000 (0:00:04.996) 0:00:05.141 ***** 2026-01-28 01:02:44.309178 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-01-28 01:02:44.309183 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309189 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309193 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-01-28 01:02:44.309197 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309201 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-01-28 01:02:44.309205 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-01-28 01:02:44.309208 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-01-28 01:02:44.309212 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-01-28 01:02:44.309216 | orchestrator | 2026-01-28 01:02:44.309220 | orchestrator | TASK [Create share directory] ************************************************** 2026-01-28 01:02:44.309225 | orchestrator | Wednesday 28 January 2026 01:01:55 +0000 (0:00:03.991) 0:00:09.132 ***** 2026-01-28 01:02:44.309230 | orchestrator | changed: [testbed-manager -> localhost] 2026-01-28 01:02:44.309277 | orchestrator | 2026-01-28 01:02:44.309281 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-01-28 01:02:44.309285 | orchestrator | Wednesday 28 January 2026 01:01:56 +0000 (0:00:00.895) 0:00:10.028 ***** 2026-01-28 01:02:44.309304 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-01-28 01:02:44.309309 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309313 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309317 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-01-28 01:02:44.309321 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309339 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-01-28 
01:02:44.309361 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-01-28 01:02:44.309366 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-01-28 01:02:44.309370 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-01-28 01:02:44.309374 | orchestrator | 2026-01-28 01:02:44.309378 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-01-28 01:02:44.309382 | orchestrator | Wednesday 28 January 2026 01:02:07 +0000 (0:00:11.748) 0:00:21.777 ***** 2026-01-28 01:02:44.309386 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-01-28 01:02:44.309390 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-01-28 01:02:44.309394 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-28 01:02:44.309398 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-01-28 01:02:44.309438 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-28 01:02:44.309443 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-01-28 01:02:44.309447 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-01-28 01:02:44.309451 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-01-28 01:02:44.309455 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-01-28 01:02:44.309459 | orchestrator | 2026-01-28 01:02:44.309462 | orchestrator | TASK [Write 
ceph keys to the configuration directory] ************************** 2026-01-28 01:02:44.309466 | orchestrator | Wednesday 28 January 2026 01:02:11 +0000 (0:00:03.686) 0:00:25.463 ***** 2026-01-28 01:02:44.309471 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-01-28 01:02:44.309475 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309482 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309487 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-01-28 01:02:44.309491 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-01-28 01:02:44.309508 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-01-28 01:02:44.309512 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-01-28 01:02:44.309517 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-01-28 01:02:44.309520 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-01-28 01:02:44.309524 | orchestrator | 2026-01-28 01:02:44.309528 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:02:44.309533 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 01:02:44.309537 | orchestrator | 2026-01-28 01:02:44.309545 | orchestrator | 2026-01-28 01:02:44.309549 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:02:44.309553 | orchestrator | Wednesday 28 January 2026 01:02:17 +0000 (0:00:06.273) 0:00:31.736 ***** 2026-01-28 01:02:44.309557 | orchestrator | =============================================================================== 2026-01-28 01:02:44.309606 | orchestrator | Write ceph keys to the share 
directory --------------------------------- 11.75s 2026-01-28 01:02:44.309610 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.27s 2026-01-28 01:02:44.309614 | orchestrator | Check if ceph keys exist ------------------------------------------------ 5.00s 2026-01-28 01:02:44.309618 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.99s 2026-01-28 01:02:44.309622 | orchestrator | Check if target directories exist --------------------------------------- 3.69s 2026-01-28 01:02:44.309626 | orchestrator | Create share directory -------------------------------------------------- 0.90s 2026-01-28 01:02:44.309630 | orchestrator | 2026-01-28 01:02:44.309634 | orchestrator | 2026-01-28 01:02:44.309638 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:02:44.309642 | orchestrator | 2026-01-28 01:02:44.309646 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 01:02:44.309650 | orchestrator | Wednesday 28 January 2026 01:00:14 +0000 (0:00:00.252) 0:00:00.252 ***** 2026-01-28 01:02:44.309654 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:02:44.309658 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:02:44.309662 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:02:44.309666 | orchestrator | 2026-01-28 01:02:44.309670 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 01:02:44.309674 | orchestrator | Wednesday 28 January 2026 01:00:14 +0000 (0:00:00.298) 0:00:00.551 ***** 2026-01-28 01:02:44.309678 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-28 01:02:44.309683 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-28 01:02:44.309686 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-28 01:02:44.309690 | orchestrator | 2026-01-28 
01:02:44.309695 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-01-28 01:02:44.309699 | orchestrator | 2026-01-28 01:02:44.309703 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-28 01:02:44.309707 | orchestrator | Wednesday 28 January 2026 01:00:14 +0000 (0:00:00.468) 0:00:01.019 ***** 2026-01-28 01:02:44.309711 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:02:44.309715 | orchestrator | 2026-01-28 01:02:44.309719 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-01-28 01:02:44.309723 | orchestrator | Wednesday 28 January 2026 01:00:15 +0000 (0:00:00.629) 0:00:01.648 ***** 2026-01-28 01:02:44.309737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.309747 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.309756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.309761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-28 01:02:44.309766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-28 01:02:44.309776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}}) 2026-01-28 01:02:44.309783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 01:02:44.309791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 01:02:44.309795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2026-01-28 01:02:44.309800 | orchestrator | 2026-01-28 01:02:44.309804 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-01-28 01:02:44.309808 | orchestrator | Wednesday 28 January 2026 01:00:17 +0000 (0:00:01.875) 0:00:03.524 ***** 2026-01-28 01:02:44.309812 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:02:44.309816 | orchestrator | 2026-01-28 01:02:44.309820 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-01-28 01:02:44.309824 | orchestrator | Wednesday 28 January 2026 01:00:17 +0000 (0:00:00.142) 0:00:03.666 ***** 2026-01-28 01:02:44.309828 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:02:44.309868 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:02:44.309873 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:02:44.309877 | orchestrator | 2026-01-28 01:02:44.309881 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-01-28 01:02:44.309885 | orchestrator | Wednesday 28 January 2026 01:00:18 +0000 (0:00:00.463) 0:00:04.129 ***** 2026-01-28 01:02:44.309889 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 01:02:44.309893 | orchestrator | 2026-01-28 01:02:44.309897 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-28 01:02:44.309901 | orchestrator | Wednesday 28 January 2026 01:00:18 +0000 (0:00:00.828) 0:00:04.958 ***** 2026-01-28 01:02:44.309904 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:02:44.309908 | orchestrator | 2026-01-28 01:02:44.309912 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-01-28 01:02:44.309916 | orchestrator | Wednesday 28 January 2026 01:00:19 +0000 (0:00:00.526) 0:00:05.485 ***** 2026-01-28 01:02:44.309925 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.309936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.309941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.309945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-28 01:02:44.309949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-28 01:02:44.309956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-28 01:02:44.309964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 01:02:44.309971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 01:02:44.309975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 01:02:44.309979 | orchestrator | 2026-01-28 01:02:44.309983 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-01-28 01:02:44.309987 | orchestrator | Wednesday 28 January 2026 01:00:22 +0000 (0:00:03.274) 0:00:08.759 ***** 2026-01-28 01:02:44.309992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 01:02:44.309996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 01:02:44.310048 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:02:44.310059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 01:02:44.310064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 01:02:44.310073 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:02:44.310077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 01:02:44.310090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 01:02:44.310105 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:02:44.310111 | orchestrator | 2026-01-28 01:02:44.310118 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-01-28 01:02:44.310123 | orchestrator | Wednesday 28 January 2026 01:00:23 +0000 (0:00:00.551) 0:00:09.311 ***** 2026-01-28 01:02:44.310131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 01:02:44.310142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 01:02:44.310161 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:02:44.310458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 01:02:44.310489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 01:02:44.310500 | 
orchestrator | skipping: [testbed-node-1] 2026-01-28 01:02:44.310504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 01:02:44.310509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 01:02:44.310528 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:02:44.310532 | orchestrator | 2026-01-28 01:02:44.310536 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-01-28 01:02:44.310545 | orchestrator | Wednesday 28 January 2026 01:00:24 +0000 (0:00:00.736) 0:00:10.048 ***** 2026-01-28 01:02:44.310551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.310556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.310561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.310569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-28 01:02:44.310577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-01-28 01:02:44.310584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-01-28 01:02:44.310588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 01:02:44.310592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 01:02:44.310597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 
01:02:44.310606 | orchestrator | 2026-01-28 01:02:44.310613 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-01-28 01:02:44.310619 | orchestrator | Wednesday 28 January 2026 01:00:27 +0000 (0:00:03.298) 0:00:13.346 ***** 2026-01-28 01:02:44.310626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.310637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-01-28 01:02:44.310648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.310656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-01-28 01:02:44.310676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 01:02:44.310698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 01:02:44.310707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-01-28 01:02:44.310714 | orchestrator | 2026-01-28 01:02:44.310721 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-01-28 01:02:44.310728 | orchestrator | Wednesday 28 January 2026 01:00:33 +0000 (0:00:05.734) 0:00:19.081 ***** 2026-01-28 01:02:44.310735 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:02:44.310742 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:02:44.310754 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:02:44.310761 | orchestrator | 
2026-01-28 01:02:44.310766 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-01-28 01:02:44.310770 | orchestrator | Wednesday 28 January 2026 01:00:34 +0000 (0:00:01.531) 0:00:20.612 ***** 2026-01-28 01:02:44.310774 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:02:44.310778 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:02:44.310782 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:02:44.310786 | orchestrator | 2026-01-28 01:02:44.310790 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-01-28 01:02:44.310793 | orchestrator | Wednesday 28 January 2026 01:00:35 +0000 (0:00:00.528) 0:00:21.140 ***** 2026-01-28 01:02:44.310797 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:02:44.310801 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:02:44.310807 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:02:44.310814 | orchestrator | 2026-01-28 01:02:44.310820 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-01-28 01:02:44.310826 | orchestrator | Wednesday 28 January 2026 01:00:35 +0000 (0:00:00.300) 0:00:21.441 ***** 2026-01-28 01:02:44.310872 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:02:44.310880 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:02:44.310884 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:02:44.310888 | orchestrator | 2026-01-28 01:02:44.310892 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-01-28 01:02:44.310896 | orchestrator | Wednesday 28 January 2026 01:00:35 +0000 (0:00:00.483) 0:00:21.924 ***** 2026-01-28 01:02:44.310900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 01:02:44.310910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 01:02:44.310922 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:02:44.310931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 01:02:44.310935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-01-28 01:02:44.310940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-01-28 01:02:44.310954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 01:02:44.310961 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:02:44.310966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-01-28 01:02:44.310970 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:02:44.310974 | orchestrator | 2026-01-28 01:02:44.310978 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-01-28 01:02:44.310982 | orchestrator | Wednesday 28 January 2026 01:00:36 +0000 (0:00:00.656) 0:00:22.581 ***** 2026-01-28 01:02:44.310986 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:02:44.310989 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:02:44.310993 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:02:44.310997 | orchestrator | 2026-01-28 01:02:44.311001 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-01-28 01:02:44.311005 | orchestrator | Wednesday 28 January 2026 01:00:36 +0000 (0:00:00.285) 0:00:22.866 ***** 2026-01-28 01:02:44.311009 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-28 01:02:44.311015 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-28 01:02:44.311019 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-01-28 01:02:44.311023 | orchestrator | 2026-01-28 01:02:44.311027 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-01-28 01:02:44.311031 | orchestrator | Wednesday 28 January 2026 01:00:38 +0000 (0:00:01.608) 0:00:24.474 ***** 2026-01-28 01:02:44.311034 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 01:02:44.311038 | orchestrator | 2026-01-28 01:02:44.311042 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-01-28 01:02:44.311046 | orchestrator | Wednesday 28 January 2026 01:00:39 +0000 (0:00:00.900) 0:00:25.375 ***** 2026-01-28 01:02:44.311050 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:02:44.311054 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:02:44.311058 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:02:44.311062 | orchestrator | 2026-01-28 01:02:44.311066 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-01-28 01:02:44.311070 | orchestrator | Wednesday 28 January 2026 01:00:40 +0000 (0:00:00.896) 0:00:26.272 ***** 2026-01-28 01:02:44.311074 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-01-28 01:02:44.311078 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-01-28 01:02:44.311082 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 01:02:44.311085 | orchestrator | 2026-01-28 01:02:44.311089 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-01-28 01:02:44.311093 | orchestrator | Wednesday 28 January 2026 01:00:41 +0000 (0:00:01.087) 
0:00:27.359 ***** 2026-01-28 01:02:44.311097 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:02:44.311101 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:02:44.311105 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:02:44.311109 | orchestrator | 2026-01-28 01:02:44.311113 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-01-28 01:02:44.311117 | orchestrator | Wednesday 28 January 2026 01:00:41 +0000 (0:00:00.331) 0:00:27.691 ***** 2026-01-28 01:02:44.311121 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-28 01:02:44.311125 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-28 01:02:44.311134 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-01-28 01:02:44.311138 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-28 01:02:44.311149 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-28 01:02:44.311153 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-01-28 01:02:44.311157 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-28 01:02:44.311161 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-28 01:02:44.311165 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-01-28 01:02:44.311169 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-28 01:02:44.311173 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-01-28 
2026-01-28 01:02:44.311177 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-01-28 01:02:44.311183 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-28 01:02:44.311188 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-28 01:02:44.311192 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-01-28 01:02:44.311196 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-28 01:02:44.311200 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-28 01:02:44.311204 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-01-28 01:02:44.311208 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-28 01:02:44.311212 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-28 01:02:44.311216 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-01-28 01:02:44.311219 | orchestrator |
2026-01-28 01:02:44.311224 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-01-28 01:02:44.311228 | orchestrator | Wednesday 28 January 2026 01:00:50 +0000 (0:00:08.660) 0:00:36.351 *****
2026-01-28 01:02:44.311232 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-28 01:02:44.311236 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-28 01:02:44.311240 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-01-28 01:02:44.311244 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-28 01:02:44.311248 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-28 01:02:44.311251 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-01-28 01:02:44.311255 | orchestrator |
2026-01-28 01:02:44.311259 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-01-28 01:02:44.311263 | orchestrator | Wednesday 28 January 2026 01:00:52 +0000 (0:00:02.619) 0:00:38.971 *****
2026-01-28 01:02:44.311268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-28 01:02:44.311278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-28 01:02:44.311285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-01-28 01:02:44.311290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-28 01:02:44.311294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-28 01:02:44.311301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-01-28 01:02:44.311305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-28 01:02:44.311312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-28 01:02:44.311319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-01-28 01:02:44.311323 | orchestrator |
2026-01-28 01:02:44.311327 | orchestrator | TASK [keystone : include_tasks] ************************************************
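The "Check keystone containers" items above each carry a kolla-style healthcheck block: a probe command (`healthcheck_curl` against port 5000, `healthcheck_listen` on 8023, or the `fernet-healthcheck.sh` script), an interval of 30s, 3 retries, and a 30s timeout. As a rough illustration only of those retry semantics — not the actual `healthcheck_curl` implementation — the shape is:

```python
import time

def check_with_retries(probe, retries=3, interval=30.0, timeout=30.0, sleep=time.sleep):
    """Run `probe` up to `retries` times, sleeping `interval` seconds between
    attempts; the container counts as healthy as soon as one attempt succeeds.
    Mirrors the interval/retries/timeout knobs in the healthcheck dicts above."""
    for attempt in range(1, retries + 1):
        try:
            if probe(timeout=timeout):
                return True
        except Exception:
            pass  # a probe error counts the same as a failed attempt
        if attempt < retries:
            sleep(interval)
    return False
```

A `probe` here could, for instance, wrap an HTTP GET against `http://192.168.16.10:5000` and return True on a 2xx/3xx status; Docker applies the same interval/retries logic natively when the `healthcheck` dict is rendered into the container definition.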
2026-01-28 01:02:44.311331 | orchestrator | Wednesday 28 January 2026 01:00:55 +0000 (0:00:02.677) 0:00:41.649 *****
2026-01-28 01:02:44.311335 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:02:44.311339 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:02:44.311343 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:02:44.311347 | orchestrator |
2026-01-28 01:02:44.311351 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-01-28 01:02:44.311355 | orchestrator | Wednesday 28 January 2026 01:00:55 +0000 (0:00:00.286) 0:00:41.935 *****
2026-01-28 01:02:44.311359 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:02:44.311364 | orchestrator |
2026-01-28 01:02:44.311368 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-01-28 01:02:44.311373 | orchestrator | Wednesday 28 January 2026 01:00:58 +0000 (0:00:02.238) 0:00:44.174 *****
2026-01-28 01:02:44.311378 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:02:44.311382 | orchestrator |
2026-01-28 01:02:44.311387 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-01-28 01:02:44.311391 | orchestrator | Wednesday 28 January 2026 01:01:00 +0000 (0:00:02.118) 0:00:46.292 *****
2026-01-28 01:02:44.311396 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:02:44.311401 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:02:44.311408 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:02:44.311413 | orchestrator |
2026-01-28 01:02:44.311417 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-01-28 01:02:44.311422 | orchestrator | Wednesday 28 January 2026 01:01:01 +0000 (0:00:01.046) 0:00:47.339 *****
2026-01-28 01:02:44.311427 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:02:44.311432 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:02:44.311436 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:02:44.311441 | orchestrator |
2026-01-28 01:02:44.311445 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-01-28 01:02:44.311450 | orchestrator | Wednesday 28 January 2026 01:01:01 +0000 (0:00:00.325) 0:00:47.664 *****
2026-01-28 01:02:44.311454 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:02:44.311459 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:02:44.311464 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:02:44.311468 | orchestrator |
2026-01-28 01:02:44.311473 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-01-28 01:02:44.311477 | orchestrator | Wednesday 28 January 2026 01:01:02 +0000 (0:00:00.402) 0:00:48.066 *****
2026-01-28 01:02:44.311482 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:02:44.311486 | orchestrator |
2026-01-28 01:02:44.311491 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-01-28 01:02:44.311495 | orchestrator | Wednesday 28 January 2026 01:01:16 +0000 (0:00:14.152) 0:01:02.218 *****
2026-01-28 01:02:44.311500 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:02:44.311504 | orchestrator |
2026-01-28 01:02:44.311509 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-28 01:02:44.311514 | orchestrator | Wednesday 28 January 2026 01:01:27 +0000 (0:00:11.016) 0:01:13.235 *****
2026-01-28 01:02:44.311519 | orchestrator |
2026-01-28 01:02:44.311524 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-28 01:02:44.311528 | orchestrator | Wednesday 28 January 2026 01:01:27 +0000 (0:00:00.068) 0:01:13.304 *****
2026-01-28 01:02:44.311533 | orchestrator |
2026-01-28 01:02:44.311538 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-01-28 01:02:44.311542 | orchestrator | Wednesday 28 January 2026 01:01:27 +0000 (0:00:00.077) 0:01:13.382 *****
2026-01-28 01:02:44.311547 | orchestrator |
2026-01-28 01:02:44.311552 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-01-28 01:02:44.311556 | orchestrator | Wednesday 28 January 2026 01:01:27 +0000 (0:00:00.069) 0:01:13.451 *****
2026-01-28 01:02:44.311561 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:02:44.311566 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:02:44.311571 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:02:44.311576 | orchestrator |
2026-01-28 01:02:44.311580 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-01-28 01:02:44.311585 | orchestrator | Wednesday 28 January 2026 01:01:36 +0000 (0:00:08.923) 0:01:22.375 *****
2026-01-28 01:02:44.311590 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:02:44.311594 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:02:44.311598 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:02:44.311602 | orchestrator |
2026-01-28 01:02:44.311608 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-01-28 01:02:44.311612 | orchestrator | Wednesday 28 January 2026 01:01:40 +0000 (0:00:04.469) 0:01:26.845 *****
2026-01-28 01:02:44.311616 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:02:44.311620 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:02:44.311624 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:02:44.311628 | orchestrator |
2026-01-28 01:02:44.311632 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-28 01:02:44.311636 | orchestrator | Wednesday 28 January 2026 01:01:51 +0000 (0:00:10.702) 0:01:37.547 *****
2026-01-28 01:02:44.311640 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:02:44.311644 | orchestrator |
2026-01-28 01:02:44.311651 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-01-28 01:02:44.311656 | orchestrator | Wednesday 28 January 2026 01:01:52 +0000 (0:00:00.616) 0:01:38.163 *****
2026-01-28 01:02:44.311659 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:02:44.311663 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:02:44.311667 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:02:44.311671 | orchestrator |
2026-01-28 01:02:44.311678 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-01-28 01:02:44.311682 | orchestrator | Wednesday 28 January 2026 01:01:52 +0000 (0:00:00.720) 0:01:38.884 *****
2026-01-28 01:02:44.311685 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:02:44.311689 | orchestrator |
2026-01-28 01:02:44.311693 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-01-28 01:02:44.311697 | orchestrator | Wednesday 28 January 2026 01:01:54 +0000 (0:00:01.509) 0:01:40.393 *****
2026-01-28 01:02:44.311701 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-01-28 01:02:44.311705 | orchestrator |
2026-01-28 01:02:44.311709 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-01-28 01:02:44.311713 | orchestrator | Wednesday 28 January 2026 01:02:06 +0000 (0:00:11.896) 0:01:52.289 *****
2026-01-28 01:02:44.311717 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-01-28 01:02:44.311721 | orchestrator |
2026-01-28 01:02:44.311725 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-01-28 01:02:44.311729 | orchestrator | Wednesday 28 January 2026 01:02:32 +0000 (0:00:26.170) 0:02:18.460 *****
2026-01-28 01:02:44.311733 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-01-28 01:02:44.311737 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-01-28 01:02:44.311740 | orchestrator |
2026-01-28 01:02:44.311744 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-01-28 01:02:44.311748 | orchestrator | Wednesday 28 January 2026 01:02:38 +0000 (0:00:05.987) 0:02:24.447 *****
2026-01-28 01:02:44.311752 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:02:44.311756 | orchestrator |
2026-01-28 01:02:44.311760 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-01-28 01:02:44.311764 | orchestrator | Wednesday 28 January 2026 01:02:38 +0000 (0:00:00.105) 0:02:24.552 *****
2026-01-28 01:02:44.311768 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:02:44.311772 | orchestrator |
2026-01-28 01:02:44.311776 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-01-28 01:02:44.311780 | orchestrator | Wednesday 28 January 2026 01:02:38 +0000 (0:00:00.111) 0:02:24.664 *****
2026-01-28 01:02:44.311784 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:02:44.311788 | orchestrator |
2026-01-28 01:02:44.311792 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-01-28 01:02:44.311796 | orchestrator | Wednesday 28 January 2026 01:02:38 +0000 (0:00:00.132) 0:02:24.796 *****
2026-01-28 01:02:44.311799 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:02:44.311803 | orchestrator |
2026-01-28 01:02:44.311807 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-01-28 01:02:44.311811 | orchestrator | Wednesday 28 January 2026 01:02:39 +0000 (0:00:00.411) 0:02:25.207 *****
2026-01-28 01:02:44.311815 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:02:44.311819 | orchestrator |
2026-01-28 01:02:44.311823 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-01-28 01:02:44.311827 | orchestrator | Wednesday 28 January 2026 01:02:41 +0000 (0:00:02.635) 0:02:27.843 *****
2026-01-28 01:02:44.311831 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:02:44.311851 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:02:44.311858 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:02:44.311864 | orchestrator |
2026-01-28 01:02:44.311871 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 01:02:44.311883 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-01-28 01:02:44.311890 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-28 01:02:44.311894 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-28 01:02:44.311898 | orchestrator |
2026-01-28 01:02:44.311902 | orchestrator |
2026-01-28 01:02:44.311905 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 01:02:44.311909 | orchestrator | Wednesday 28 January 2026 01:02:42 +0000 (0:00:00.417) 0:02:28.260 *****
2026-01-28 01:02:44.311913 | orchestrator | ===============================================================================
2026-01-28 01:02:44.311917 | orchestrator | service-ks-register : keystone | Creating services --------------------- 26.17s
2026-01-28 01:02:44.311921 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.15s
2026-01-28 01:02:44.311928 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.90s
2026-01-28 01:02:44.311932 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.02s
2026-01-28 01:02:44.311936 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.70s
2026-01-28 01:02:44.311940 | orchestrator | keystone : Restart keystone-ssh container ------------------------------- 8.92s
2026-01-28 01:02:44.311944 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.66s
2026-01-28 01:02:44.311948 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.99s
2026-01-28 01:02:44.311952 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.73s
2026-01-28 01:02:44.311956 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.47s
2026-01-28 01:02:44.311960 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.30s
2026-01-28 01:02:44.311964 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.27s
2026-01-28 01:02:44.311970 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.68s
2026-01-28 01:02:44.311974 | orchestrator | keystone : Creating default user role ----------------------------------- 2.64s
2026-01-28 01:02:44.311978 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.62s
2026-01-28 01:02:44.311982 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.24s
2026-01-28 01:02:44.311986 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.12s
2026-01-28 01:02:44.311990 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.88s
2026-01-28 01:02:44.311994 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.61s
2026-01-28 01:02:44.311998 | orchestrator | keystone : Copying keystone-startup script for keystone
----------------- 1.53s 2026-01-28 01:02:44.312002 | orchestrator | 2026-01-28 01:02:44 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:02:44.312006 | orchestrator | 2026-01-28 01:02:44 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:02:44.312010 | orchestrator | 2026-01-28 01:02:44 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:02:44.312014 | orchestrator | 2026-01-28 01:02:44 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:02:44.312018 | orchestrator | 2026-01-28 01:02:44 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:02:47.343337 | orchestrator | 2026-01-28 01:02:47 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28 01:02:47.345953 | orchestrator | 2026-01-28 01:02:47 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:02:47.346383 | orchestrator | 2026-01-28 01:02:47 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:02:47.348010 | orchestrator | 2026-01-28 01:02:47 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:02:47.348398 | orchestrator | 2026-01-28 01:02:47 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:02:47.348422 | orchestrator | 2026-01-28 01:02:47 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:02:50.378863 | orchestrator | 2026-01-28 01:02:50 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28 01:02:50.380176 | orchestrator | 2026-01-28 01:02:50 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:02:50.381925 | orchestrator | 2026-01-28 01:02:50 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:02:50.383599 | orchestrator | 2026-01-28 01:02:50 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is 
in state STARTED 2026-01-28 01:02:50.387947 | orchestrator | 2026-01-28 01:02:50 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:02:50.388169 | orchestrator | 2026-01-28 01:02:50 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:02:53.431223 | orchestrator | 2026-01-28 01:02:53 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28 01:02:53.432411 | orchestrator | 2026-01-28 01:02:53 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:02:53.434407 | orchestrator | 2026-01-28 01:02:53 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:02:53.435794 | orchestrator | 2026-01-28 01:02:53 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:02:53.437614 | orchestrator | 2026-01-28 01:02:53 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:02:53.437651 | orchestrator | 2026-01-28 01:02:53 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:02:56.478438 | orchestrator | 2026-01-28 01:02:56 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28 01:02:56.479303 | orchestrator | 2026-01-28 01:02:56 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:02:56.481116 | orchestrator | 2026-01-28 01:02:56 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:02:56.482726 | orchestrator | 2026-01-28 01:02:56 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:02:56.484423 | orchestrator | 2026-01-28 01:02:56 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:02:56.484503 | orchestrator | 2026-01-28 01:02:56 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:02:59.526085 | orchestrator | 2026-01-28 01:02:59 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28 
01:02:59.527900 | orchestrator | 2026-01-28 01:02:59 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:02:59.530259 | orchestrator | 2026-01-28 01:02:59 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:02:59.532533 | orchestrator | 2026-01-28 01:02:59 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:02:59.534904 | orchestrator | 2026-01-28 01:02:59 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:02:59.534979 | orchestrator | 2026-01-28 01:02:59 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:03:02.571762 | orchestrator | 2026-01-28 01:03:02 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28 01:03:02.573961 | orchestrator | 2026-01-28 01:03:02 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:03:02.575521 | orchestrator | 2026-01-28 01:03:02 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:03:02.577112 | orchestrator | 2026-01-28 01:03:02 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:03:02.578768 | orchestrator | 2026-01-28 01:03:02 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:03:02.578796 | orchestrator | 2026-01-28 01:03:02 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:03:05.621381 | orchestrator | 2026-01-28 01:03:05 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28 01:03:05.621538 | orchestrator | 2026-01-28 01:03:05 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:03:05.622506 | orchestrator | 2026-01-28 01:03:05 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:03:05.623546 | orchestrator | 2026-01-28 01:03:05 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 
01:03:05.624402 | orchestrator | 2026-01-28 01:03:05 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:03:05.624435 | orchestrator | 2026-01-28 01:03:05 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:03:08.683022 | orchestrator | 2026-01-28 01:03:08 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28 01:03:08.684515 | orchestrator | 2026-01-28 01:03:08 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:03:08.685887 | orchestrator | 2026-01-28 01:03:08 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:03:08.687157 | orchestrator | 2026-01-28 01:03:08 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:03:08.688528 | orchestrator | 2026-01-28 01:03:08 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:03:08.688896 | orchestrator | 2026-01-28 01:03:08 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:03:11.733370 | orchestrator | 2026-01-28 01:03:11 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state STARTED 2026-01-28 01:03:11.734096 | orchestrator | 2026-01-28 01:03:11 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:03:11.735514 | orchestrator | 2026-01-28 01:03:11 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:03:11.736367 | orchestrator | 2026-01-28 01:03:11 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:03:11.737494 | orchestrator | 2026-01-28 01:03:11 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:03:11.737763 | orchestrator | 2026-01-28 01:03:11 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:03:14.775472 | orchestrator | 2026-01-28 01:03:14 | INFO  | Task bff837a5-fea5-44c4-9a00-9f63e23877cd is in state SUCCESS 2026-01-28 01:03:14.776236 | orchestrator 
| 2026-01-28 01:03:14 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:03:14.777328 | orchestrator | 2026-01-28 01:03:14 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:03:14.778444 | orchestrator | 2026-01-28 01:03:14 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:03:14.780106 | orchestrator | 2026-01-28 01:03:14 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:03:14.780166 | orchestrator | 2026-01-28 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:03:17.811708 | orchestrator | 2026-01-28 01:03:17 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:03:17.813256 | orchestrator | 2026-01-28 01:03:17 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:03:17.816533 | orchestrator | 2026-01-28 01:03:17 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:03:17.818156 | orchestrator | 2026-01-28 01:03:17 | INFO  | Task 4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state STARTED 2026-01-28 01:03:17.820589 | orchestrator | 2026-01-28 01:03:17 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED 2026-01-28 01:03:17.820771 | orchestrator | 2026-01-28 01:03:17 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:03:20.847780 | orchestrator | 2026-01-28 01:03:20 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:03:20.848787 | orchestrator | 2026-01-28 01:03:20 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED 2026-01-28 01:03:20.850600 | orchestrator | 2026-01-28 01:03:20 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:03:20.851379 | orchestrator | 2026-01-28 01:03:20 | INFO  | Task 4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state STARTED 2026-01-28 01:03:20.852103 | orchestrator | 
2026-01-28 01:03:20 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED
2026-01-28 01:03:20.852183 | orchestrator | 2026-01-28 01:03:20 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:03:23.919903 | orchestrator | 2026-01-28 01:03:23 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED
2026-01-28 01:03:23.919993 | orchestrator | 2026-01-28 01:03:23 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED
2026-01-28 01:03:23.921099 | orchestrator | 2026-01-28 01:03:23 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED
2026-01-28 01:03:23.921126 | orchestrator | 2026-01-28 01:03:23 | INFO  | Task 4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state STARTED
2026-01-28 01:03:23.922727 | orchestrator | 2026-01-28 01:03:23 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED
2026-01-28 01:03:23.922751 | orchestrator | 2026-01-28 01:03:23 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:03:45.115020 | orchestrator | 2026-01-28 01:03:45 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED
2026-01-28 01:03:45.116318 | orchestrator | 2026-01-28 01:03:45 | INFO  | Task
79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED
2026-01-28 01:03:45.117067 | orchestrator | 2026-01-28 01:03:45 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED
2026-01-28 01:03:45.117502 | orchestrator | 2026-01-28 01:03:45 | INFO  | Task 4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state STARTED
2026-01-28 01:03:45.124694 | orchestrator | 2026-01-28 01:03:45 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED
2026-01-28 01:03:45.124758 | orchestrator | 2026-01-28 01:03:45 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:03:48.150487 | orchestrator | 2026-01-28 01:03:48 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED
2026-01-28 01:03:48.150987 | orchestrator | 2026-01-28 01:03:48 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state STARTED
2026-01-28 01:03:48.151801 | orchestrator | 2026-01-28 01:03:48 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED
2026-01-28 01:03:48.154866 | orchestrator | 2026-01-28 01:03:48 | INFO  | Task 4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state STARTED
2026-01-28 01:03:48.155478 | orchestrator | 2026-01-28 01:03:48 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED
2026-01-28 01:03:48.155630 | orchestrator | 2026-01-28 01:03:48 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:03:51.180119 | orchestrator | 2026-01-28 01:03:51 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED
2026-01-28 01:03:51.180540 | orchestrator | 2026-01-28 01:03:51 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED
2026-01-28 01:03:51.181168 | orchestrator | 2026-01-28 01:03:51 | INFO  | Task 79ccc28b-8dfc-4f2c-b3c8-09ba5c257e5c is in state SUCCESS
2026-01-28 01:03:51.181623 | orchestrator |
2026-01-28 01:03:51.181642 | orchestrator |
2026-01-28 01:03:51.181651 | orchestrator | PLAY [Apply role cephclient] ***************************************************
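The interleaved "Task … is in state STARTED" / "Wait 1 second(s) until the next check" records above come from a client polling asynchronous task states every few seconds until each task reaches a terminal state such as SUCCESS. A minimal sketch of such a polling loop (hypothetical names; not the actual OSISM client code):

```python
import time

# States we treat as terminal; an assumption for this sketch.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(get_state, task_ids, interval=1.0, log=print):
    """Poll get_state(task_id) until every task reaches a terminal state,
    logging each observed state and a wait message between polling rounds."""
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state not in TERMINAL_STATES:
                still_running.append(task_id)
        if not still_running:
            return
        log(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
        pending = still_running  # finished tasks are not polled again
```

Note that tasks already in a terminal state drop out of the polling set, which matches how a task disappears from the log once it reports SUCCESS.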
2026-01-28 01:03:51.181660 | orchestrator |
2026-01-28 01:03:51.181668 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-01-28 01:03:51.181677 | orchestrator | Wednesday 28 January 2026 01:02:21 +0000 (0:00:00.207) 0:00:00.207 *****
2026-01-28 01:03:51.181701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-01-28 01:03:51.181711 | orchestrator |
2026-01-28 01:03:51.181719 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-01-28 01:03:51.181728 | orchestrator | Wednesday 28 January 2026 01:02:22 +0000 (0:00:00.207) 0:00:00.414 *****
2026-01-28 01:03:51.181737 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-01-28 01:03:51.181746 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-01-28 01:03:51.181766 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-01-28 01:03:51.181775 | orchestrator |
2026-01-28 01:03:51.181783 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-01-28 01:03:51.181791 | orchestrator | Wednesday 28 January 2026 01:02:23 +0000 (0:00:01.112) 0:00:01.527 *****
2026-01-28 01:03:51.181799 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-01-28 01:03:51.181808 | orchestrator |
2026-01-28 01:03:51.181816 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-01-28 01:03:51.181852 | orchestrator | Wednesday 28 January 2026 01:02:24 +0000 (0:00:01.263) 0:00:02.791 *****
2026-01-28 01:03:51.181861 | orchestrator | changed: [testbed-manager]
2026-01-28 01:03:51.181869 | orchestrator |
2026-01-28 01:03:51.181877 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-01-28 01:03:51.181907 | orchestrator | Wednesday 28 January 2026 01:02:25 +0000 (0:00:00.823) 0:00:03.614 *****
2026-01-28 01:03:51.181915 | orchestrator | changed: [testbed-manager]
2026-01-28 01:03:51.181923 | orchestrator |
2026-01-28 01:03:51.181931 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-01-28 01:03:51.181939 | orchestrator | Wednesday 28 January 2026 01:02:26 +0000 (0:00:00.811) 0:00:04.426 *****
2026-01-28 01:03:51.181947 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-01-28 01:03:51.181955 | orchestrator | ok: [testbed-manager]
2026-01-28 01:03:51.181963 | orchestrator |
2026-01-28 01:03:51.181971 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-01-28 01:03:51.181978 | orchestrator | Wednesday 28 January 2026 01:03:05 +0000 (0:00:38.906) 0:00:43.333 *****
2026-01-28 01:03:51.181987 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-01-28 01:03:51.181995 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-01-28 01:03:51.182002 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-01-28 01:03:51.182010 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-01-28 01:03:51.182081 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-01-28 01:03:51.182090 | orchestrator |
2026-01-28 01:03:51.182098 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-01-28 01:03:51.182106 | orchestrator | Wednesday 28 January 2026 01:03:08 +0000 (0:00:03.927) 0:00:47.261 *****
2026-01-28 01:03:51.182114 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-01-28 01:03:51.182122 | orchestrator |
2026-01-28 01:03:51.182130 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-01-28 01:03:51.182138 | orchestrator | Wednesday 28 January 2026 01:03:09 +0000 (0:00:00.483) 0:00:47.745 *****
2026-01-28 01:03:51.182146 | orchestrator | skipping: [testbed-manager]
2026-01-28 01:03:51.182154 | orchestrator |
2026-01-28 01:03:51.182162 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-01-28 01:03:51.182170 | orchestrator | Wednesday 28 January 2026 01:03:09 +0000 (0:00:00.123) 0:00:47.868 *****
2026-01-28 01:03:51.182178 | orchestrator | skipping: [testbed-manager]
2026-01-28 01:03:51.182186 | orchestrator |
2026-01-28 01:03:51.182194 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-01-28 01:03:51.182202 | orchestrator | Wednesday 28 January 2026 01:03:10 +0000 (0:00:00.527) 0:00:48.396 *****
2026-01-28 01:03:51.182210 | orchestrator | changed: [testbed-manager]
2026-01-28 01:03:51.182218 | orchestrator |
2026-01-28 01:03:51.182226 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-01-28 01:03:51.182234 | orchestrator | Wednesday 28 January 2026 01:03:11 +0000 (0:00:01.397) 0:00:49.793 *****
2026-01-28 01:03:51.182242 | orchestrator | changed: [testbed-manager]
2026-01-28 01:03:51.182250 | orchestrator |
2026-01-28 01:03:51.182259 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-01-28 01:03:51.182268 | orchestrator | Wednesday 28 January 2026 01:03:12 +0000 (0:00:00.790) 0:00:50.583 *****
2026-01-28 01:03:51.182278 | orchestrator | changed: [testbed-manager]
2026-01-28 01:03:51.182286 | orchestrator |
2026-01-28 01:03:51.182296 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-01-28 01:03:51.182305 | orchestrator | Wednesday 28 January 2026 01:03:12 +0000 (0:00:00.535) 0:00:51.119 *****
2026-01-28 01:03:51.182314 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-01-28 01:03:51.182324 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-01-28 01:03:51.182333 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-01-28 01:03:51.182342 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-01-28 01:03:51.182351 | orchestrator |
2026-01-28 01:03:51.182361 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 01:03:51.182370 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-01-28 01:03:51.182387 | orchestrator |
2026-01-28 01:03:51.182396 | orchestrator |
2026-01-28 01:03:51.182415 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 01:03:51.182425 | orchestrator | Wednesday 28 January 2026 01:03:14 +0000 (0:00:01.324) 0:00:52.443 *****
2026-01-28 01:03:51.182434 | orchestrator | ===============================================================================
2026-01-28 01:03:51.182443 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 38.91s
2026-01-28 01:03:51.182452 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.93s
2026-01-28 01:03:51.182461 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.40s
2026-01-28 01:03:51.182471 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.32s
2026-01-28 01:03:51.182480 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.26s
2026-01-28 01:03:51.182489 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.11s
2026-01-28 01:03:51.182499 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.82s
2026-01-28 01:03:51.182508 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.81s
2026-01-28 01:03:51.182517 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.79s
2026-01-28 01:03:51.182526 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.54s
2026-01-28 01:03:51.182536 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.53s
2026-01-28 01:03:51.182545 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s
2026-01-28 01:03:51.182555 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s
2026-01-28 01:03:51.182564 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s
2026-01-28 01:03:51.182574 | orchestrator |
2026-01-28 01:03:51.182583 | orchestrator |
2026-01-28 01:03:51.182592 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-01-28 01:03:51.182601 | orchestrator |
2026-01-28 01:03:51.182611 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-01-28 01:03:51.182620 | orchestrator | Wednesday 28 January 2026 01:02:47 +0000 (0:00:00.086) 0:00:00.086 *****
2026-01-28 01:03:51.182630 | orchestrator | changed: [localhost]
2026-01-28 01:03:51.182639 | orchestrator |
2026-01-28 01:03:51.182646 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-01-28 01:03:51.182654 | orchestrator | Wednesday 28 January 2026 01:02:48 +0000 (0:00:00.921) 0:00:01.007 *****
2026-01-28 01:03:51.182662 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
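The "FAILED - RETRYING: … (N retries left)." records seen for "Manage cephclient service" and "Download ironic-agent initramfs" are Ansible's `retries`/`until` loop: the task is re-run until it succeeds or the retry budget is exhausted. A rough sketch of the same pattern as a standalone helper (hypothetical function, not Ansible's implementation):

```python
import time

def run_with_retries(task, name, retries=3, delay=0.0, log=print):
    """Run `task` once, then up to `retries` more times on failure,
    logging an Ansible-style message before each retry attempt."""
    for attempt in range(retries + 1):
        try:
            return task()
        except Exception:
            remaining = retries - attempt
            if remaining == 0:
                raise  # budget exhausted, surface the last failure
            log(f"FAILED - RETRYING: {name} ({remaining} retries left).")
            time.sleep(delay)
```

In the log above the initramfs download failed once and then succeeded on the retry, which is why a single RETRYING record is followed by `changed: [localhost]`.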
2026-01-28 01:03:51.182670 | orchestrator | changed: [localhost]
2026-01-28 01:03:51.182678 | orchestrator |
2026-01-28 01:03:51.182686 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-01-28 01:03:51.182694 | orchestrator | Wednesday 28 January 2026 01:03:41 +0000 (0:00:52.956) 0:00:53.964 *****
2026-01-28 01:03:51.182731 | orchestrator | changed: [localhost]
2026-01-28 01:03:51.182740 | orchestrator |
2026-01-28 01:03:51.182748 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 01:03:51.182756 | orchestrator |
2026-01-28 01:03:51.182764 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 01:03:51.182772 | orchestrator | Wednesday 28 January 2026 01:03:47 +0000 (0:00:05.878) 0:00:59.842 *****
2026-01-28 01:03:51.182802 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:03:51.182812 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:03:51.182840 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:03:51.182848 | orchestrator |
2026-01-28 01:03:51.182856 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 01:03:51.182864 | orchestrator | Wednesday 28 January 2026 01:03:47 +0000 (0:00:00.585) 0:01:00.428 *****
2026-01-28 01:03:51.182872 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-01-28 01:03:51.182887 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-01-28 01:03:51.182894 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-01-28 01:03:51.182902 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-01-28 01:03:51.182910 | orchestrator |
2026-01-28 01:03:51.182918 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-01-28 01:03:51.182926 | orchestrator | skipping: no hosts matched
2026-01-28 01:03:51.182934 | orchestrator |
2026-01-28 01:03:51.182942 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 01:03:51.182950 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:03:51.182959 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:03:51.182968 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:03:51.182976 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:03:51.182984 | orchestrator |
2026-01-28 01:03:51.183003 | orchestrator |
2026-01-28 01:03:51.183020 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 01:03:51.183028 | orchestrator | Wednesday 28 January 2026 01:03:48 +0000 (0:00:00.814) 0:01:01.242 *****
2026-01-28 01:03:51.183036 | orchestrator | ===============================================================================
2026-01-28 01:03:51.183044 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 52.96s
2026-01-28 01:03:51.183052 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.88s
2026-01-28 01:03:51.183060 | orchestrator | Ensure the destination directory exists --------------------------------- 0.92s
2026-01-28 01:03:51.183074 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2026-01-28 01:03:51.183082 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.59s
2026-01-28 01:03:51.183168 | orchestrator | 2026-01-28 01:03:51 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED
2026-01-28 01:03:51.183179 | orchestrator | 2026-01-28 01:03:51 | INFO  | Task
4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state STARTED
2026-01-28 01:03:51.183777 | orchestrator | 2026-01-28 01:03:51 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED
2026-01-28 01:03:51.183910 | orchestrator | 2026-01-28 01:03:51 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:03:54.209935 | orchestrator | 2026-01-28 01:03:54 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED
2026-01-28 01:03:54.210619 | orchestrator | 2026-01-28 01:03:54 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED
2026-01-28 01:03:54.212266 | orchestrator | 2026-01-28 01:03:54 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED
2026-01-28 01:03:54.213083 | orchestrator | 2026-01-28 01:03:54 | INFO  | Task 4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state STARTED
2026-01-28 01:03:54.214324 | orchestrator | 2026-01-28 01:03:54 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED
2026-01-28 01:03:54.214440 | orchestrator | 2026-01-28 01:03:54 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:04:45.807126 | orchestrator | 2026-01-28 01:04:45 | INFO  | Task
e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED
2026-01-28 01:04:45.807597 | orchestrator | 2026-01-28 01:04:45 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED
2026-01-28 01:04:45.808569 | orchestrator | 2026-01-28 01:04:45 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED
2026-01-28 01:04:45.810187 | orchestrator | 2026-01-28 01:04:45 | INFO  | Task 4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state STARTED
2026-01-28 01:04:45.810219 | orchestrator | 2026-01-28 01:04:45 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED
2026-01-28 01:04:45.810232 | orchestrator | 2026-01-28 01:04:45 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:04:48.841880 | orchestrator | 2026-01-28 01:04:48 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED
2026-01-28 01:04:48.842396 | orchestrator | 2026-01-28 01:04:48 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED
2026-01-28 01:04:48.845396 | orchestrator | 2026-01-28 01:04:48 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED
2026-01-28 01:04:48.846536 | orchestrator | 2026-01-28 01:04:48 | INFO  | Task 4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state STARTED
2026-01-28 01:04:48.847386 | orchestrator | 2026-01-28 01:04:48 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state STARTED
2026-01-28 01:04:48.847623 | orchestrator | 2026-01-28 01:04:48 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:04:51.875969 | orchestrator | 2026-01-28 01:04:51 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED
2026-01-28 01:04:51.877490 | orchestrator | 2026-01-28 01:04:51 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED
2026-01-28 01:04:51.877523 | orchestrator | 2026-01-28 01:04:51 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED
2026-01-28 01:04:51.877536 | orchestrator | 2026-01-28 01:04:51 | INFO  | Task
55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:04:51.878134 | orchestrator | 2026-01-28 01:04:51 | INFO  | Task 4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state STARTED 2026-01-28 01:04:51.879379 | orchestrator | 2026-01-28 01:04:51 | INFO  | Task 46cd44ab-ff03-456c-a7b6-fc323027b6a9 is in state SUCCESS 2026-01-28 01:04:51.881183 | orchestrator | 2026-01-28 01:04:51.881229 | orchestrator | 2026-01-28 01:04:51.881242 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:04:51.881254 | orchestrator | 2026-01-28 01:04:51.881266 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 01:04:51.881309 | orchestrator | Wednesday 28 January 2026 01:02:47 +0000 (0:00:00.235) 0:00:00.235 ***** 2026-01-28 01:04:51.881329 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:04:51.881348 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:04:51.881367 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:04:51.881387 | orchestrator | 2026-01-28 01:04:51.881406 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 01:04:51.881424 | orchestrator | Wednesday 28 January 2026 01:02:47 +0000 (0:00:00.285) 0:00:00.521 ***** 2026-01-28 01:04:51.881443 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-01-28 01:04:51.881463 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-01-28 01:04:51.881480 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-01-28 01:04:51.881491 | orchestrator | 2026-01-28 01:04:51.881502 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-01-28 01:04:51.881513 | orchestrator | 2026-01-28 01:04:51.881523 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-01-28 01:04:51.881534 | orchestrator | Wednesday 28 January 
2026 01:02:47 +0000 (0:00:00.406) 0:00:00.927 ***** 2026-01-28 01:04:51.881545 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:04:51.881556 | orchestrator | 2026-01-28 01:04:51.881567 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-01-28 01:04:51.881578 | orchestrator | Wednesday 28 January 2026 01:02:48 +0000 (0:00:00.496) 0:00:01.423 ***** 2026-01-28 01:04:51.881589 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-01-28 01:04:51.881600 | orchestrator | 2026-01-28 01:04:51.881610 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-01-28 01:04:51.881621 | orchestrator | Wednesday 28 January 2026 01:02:51 +0000 (0:00:03.433) 0:00:04.857 ***** 2026-01-28 01:04:51.881632 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-01-28 01:04:51.881642 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-01-28 01:04:51.881653 | orchestrator | 2026-01-28 01:04:51.881664 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-01-28 01:04:51.881675 | orchestrator | Wednesday 28 January 2026 01:02:57 +0000 (0:00:05.862) 0:00:10.719 ***** 2026-01-28 01:04:51.881685 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-01-28 01:04:51.881696 | orchestrator | 2026-01-28 01:04:51.881707 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-01-28 01:04:51.881718 | orchestrator | Wednesday 28 January 2026 01:03:01 +0000 (0:00:03.559) 0:00:14.279 ***** 2026-01-28 01:04:51.881748 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-28 01:04:51.881760 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> service) 2026-01-28 01:04:51.881771 | orchestrator | 2026-01-28 01:04:51.881781 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-01-28 01:04:51.881792 | orchestrator | Wednesday 28 January 2026 01:03:05 +0000 (0:00:03.726) 0:00:18.006 ***** 2026-01-28 01:04:51.881803 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-28 01:04:51.881844 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-01-28 01:04:51.881857 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-01-28 01:04:51.881868 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-01-28 01:04:51.881878 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-01-28 01:04:51.881889 | orchestrator | 2026-01-28 01:04:51.881900 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-01-28 01:04:51.881911 | orchestrator | Wednesday 28 January 2026 01:03:21 +0000 (0:00:16.707) 0:00:34.713 ***** 2026-01-28 01:04:51.881921 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-01-28 01:04:51.881941 | orchestrator | 2026-01-28 01:04:51.881952 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-01-28 01:04:51.881963 | orchestrator | Wednesday 28 January 2026 01:03:25 +0000 (0:00:03.593) 0:00:38.307 ***** 2026-01-28 01:04:51.881977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-28 01:04:51.882006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-28 01:04:51.882065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-28 01:04:51.882275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882312 | orchestrator | 2026-01-28 01:04:51.882333 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-01-28 01:04:51.882345 | orchestrator | Wednesday 28 January 2026 01:03:28 +0000 (0:00:02.991) 0:00:41.299 ***** 2026-01-28 01:04:51.882356 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-01-28 01:04:51.882367 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-01-28 01:04:51.882377 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-01-28 01:04:51.882388 | orchestrator | 2026-01-28 01:04:51.882399 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-01-28 01:04:51.882409 | orchestrator | Wednesday 28 January 2026 01:03:29 +0000 (0:00:01.344) 0:00:42.643 ***** 2026-01-28 01:04:51.882420 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:04:51.882431 | orchestrator | 2026-01-28 01:04:51.882447 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-01-28 01:04:51.882459 | orchestrator | Wednesday 28 January 2026 01:03:29 +0000 (0:00:00.110) 0:00:42.753 ***** 2026-01-28 01:04:51.882477 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:04:51.882488 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:04:51.882499 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:04:51.882510 | orchestrator | 2026-01-28 01:04:51.882520 | orchestrator | TASK [barbican : include_tasks] 
************************************************ 2026-01-28 01:04:51.882531 | orchestrator | Wednesday 28 January 2026 01:03:30 +0000 (0:00:00.614) 0:00:43.370 ***** 2026-01-28 01:04:51.882542 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:04:51.882552 | orchestrator | 2026-01-28 01:04:51.882563 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-01-28 01:04:51.882574 | orchestrator | Wednesday 28 January 2026 01:03:30 +0000 (0:00:00.527) 0:00:43.898 ***** 2026-01-28 01:04:51.882585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-28 01:04:51.882605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-28 01:04:51.882617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-01-28 01:04:51.882633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:04:51.882714 | orchestrator | 2026-01-28 01:04:51.882725 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-01-28 01:04:51.882736 | orchestrator | Wednesday 28 January 2026 01:03:34 +0000 (0:00:03.509) 0:00:47.407 ***** 2026-01-28 01:04:51.882752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-28 01:04:51.882769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-28 01:04:51.882781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:04:51.882792 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:04:51.882834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-28 01:04:51.882847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-01-28 01:04:51.882859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:04:51.882875 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:04:51.882891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-28 01:04:51.882903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-28 01:04:51.882914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:04:51.882925 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:04:51.882936 | orchestrator | 2026-01-28 01:04:51.882953 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-01-28 01:04:51.882964 | orchestrator | Wednesday 28 January 2026 01:03:36 +0000 (0:00:02.124) 0:00:49.532 ***** 2026-01-28 01:04:51.882975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-01-28 01:04:51.882992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-01-28 01:04:51.883008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:04:51.883019 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:04:51.883058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883099 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:04:51.883110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883159 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:04:51.883170 | orchestrator |
2026-01-28 01:04:51.883181 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-01-28 01:04:51.883192 | orchestrator | Wednesday 28 January 2026 01:03:38 +0000 (0:00:01.485) 0:00:51.017 *****
2026-01-28 01:04:51.883203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883334 | orchestrator |
2026-01-28 01:04:51.883346 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-01-28 01:04:51.883356 | orchestrator | Wednesday 28 January 2026 01:03:41 +0000 (0:00:03.746) 0:00:54.764 *****
2026-01-28 01:04:51.883367 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:04:51.883378 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:04:51.883389 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:04:51.883400 | orchestrator |
2026-01-28 01:04:51.883411 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-01-28 01:04:51.883421 | orchestrator | Wednesday 28 January 2026 01:03:44 +0000 (0:00:01.937) 0:00:57.754 *****
2026-01-28 01:04:51.883432 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-28 01:04:51.883443 | orchestrator |
2026-01-28 01:04:51.883453 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-01-28 01:04:51.883464 | orchestrator | Wednesday 28 January 2026 01:03:46 +0000 (0:00:01.120) 0:00:59.691 *****
2026-01-28 01:04:51.883475 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:04:51.883490 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:04:51.883501 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:04:51.883512 | orchestrator |
2026-01-28 01:04:51.883523 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-01-28 01:04:51.883533 | orchestrator | Wednesday 28 January 2026 01:03:47 +0000 (0:00:01.120) 0:01:00.812 *****
2026-01-28 01:04:51.883544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883675 | orchestrator |
2026-01-28 01:04:51.883686 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2026-01-28 01:04:51.883697 | orchestrator | Wednesday 28 January 2026 01:03:57 +0000 (0:00:09.484) 0:01:10.296 *****
2026-01-28 01:04:51.883708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883747 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:04:51.883764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883804 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:04:51.883855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.883899 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:04:51.883910 | orchestrator |
2026-01-28 01:04:51.883921 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2026-01-28 01:04:51.883932 | orchestrator | Wednesday 28 January 2026 01:03:58 +0000 (0:00:01.270) 0:01:11.567 *****
2026-01-28 01:04:51.883951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.883994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-01-28 01:04:51.884016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.884044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.884064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.884076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.884087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.884103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:04:51.884114 | orchestrator |
2026-01-28 01:04:51.884125 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-01-28 01:04:51.884136 | orchestrator | Wednesday 28 January 2026 01:04:02 +0000 (0:00:03.825) 0:01:15.393 *****
2026-01-28 01:04:51.884147 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:04:51.884158 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:04:51.884169 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:04:51.884179 | orchestrator |
2026-01-28 01:04:51.884190 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-01-28 01:04:51.884201 | orchestrator | Wednesday 28 January 2026 01:04:02 +0000 (0:00:00.259) 0:01:15.652 *****
2026-01-28 01:04:51.884221 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:04:51.884232 | orchestrator |
2026-01-28 01:04:51.884243 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-01-28 01:04:51.884254 | orchestrator | Wednesday 28 January 2026 01:04:04 +0000 (0:00:02.065) 0:01:17.717 *****
2026-01-28 01:04:51.884264 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:04:51.884275 | orchestrator |
2026-01-28 01:04:51.884286 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-01-28 01:04:51.884297 | orchestrator | Wednesday 28 January 2026 01:04:07 +0000 (0:00:02.652) 0:01:20.370 *****
2026-01-28 01:04:51.884307 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:04:51.884318 | orchestrator |
2026-01-28 01:04:51.884329 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-01-28 01:04:51.884340 | orchestrator | Wednesday 28 January 2026 01:04:18 +0000 (0:00:10.886) 0:01:31.256 *****
2026-01-28 01:04:51.884350 | orchestrator |
2026-01-28 01:04:51.884361 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-01-28 01:04:51.884371 | orchestrator | Wednesday 28 January 2026 01:04:18 +0000 (0:00:00.069) 0:01:31.325 *****
2026-01-28 01:04:51.884382 | orchestrator |
2026-01-28 01:04:51.884393 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-01-28 01:04:51.884403 | orchestrator | Wednesday 28 January 2026 01:04:18 +0000 (0:00:00.063) 0:01:31.389 *****
2026-01-28 01:04:51.884414 | orchestrator |
2026-01-28 01:04:51.884425 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-01-28 01:04:51.884436 | orchestrator | Wednesday 28 January 2026 01:04:18 +0000 (0:00:00.070) 0:01:31.459 *****
2026-01-28 01:04:51.884446 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:04:51.884457 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:04:51.884468 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:04:51.884478 | orchestrator |
2026-01-28 01:04:51.884489 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-01-28 01:04:51.884500 | orchestrator | Wednesday 28 January 2026 01:04:25 +0000 (0:00:06.937) 0:01:38.397 *****
2026-01-28 01:04:51.884510 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:04:51.884521 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:04:51.884537 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:04:51.884548 | orchestrator |
2026-01-28 01:04:51.884559 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-01-28 01:04:51.884570 | orchestrator | Wednesday 28 January 2026 01:04:36 +0000 (0:00:11.392) 0:01:49.789 *****
2026-01-28 01:04:51.884581 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:04:51.884592 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:04:51.884603 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:04:51.884613 | orchestrator |
2026-01-28 01:04:51.884624 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 01:04:51.884636 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-28 01:04:51.884647 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-28 01:04:51.884658 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-01-28 01:04:51.884669 | orchestrator |
2026-01-28 01:04:51.884680 | orchestrator |
2026-01-28 01:04:51.884693 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 01:04:51.884712 | orchestrator | Wednesday 28 January 2026 01:04:48
+0000 (0:00:11.376) 0:02:01.166 ***** 2026-01-28 01:04:51.884732 | orchestrator | =============================================================================== 2026-01-28 01:04:51.884744 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.71s 2026-01-28 01:04:51.884755 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.39s 2026-01-28 01:04:51.884772 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.38s 2026-01-28 01:04:51.884783 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.89s 2026-01-28 01:04:51.884793 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.48s 2026-01-28 01:04:51.884804 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.94s 2026-01-28 01:04:51.884836 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 5.86s 2026-01-28 01:04:51.884847 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.83s 2026-01-28 01:04:51.884858 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.75s 2026-01-28 01:04:51.884869 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.73s 2026-01-28 01:04:51.884880 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.59s 2026-01-28 01:04:51.884890 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.56s 2026-01-28 01:04:51.884906 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.51s 2026-01-28 01:04:51.884918 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.43s 2026-01-28 01:04:51.884928 | orchestrator | barbican : Ensuring config directories exist 
---------------------------- 2.99s 2026-01-28 01:04:51.884939 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.99s 2026-01-28 01:04:51.884950 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.65s 2026-01-28 01:04:51.884960 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.12s 2026-01-28 01:04:51.884971 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.07s 2026-01-28 01:04:51.884982 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.94s 2026-01-28 01:04:51.884993 | orchestrator | 2026-01-28 01:04:51 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:04:54.906184 | orchestrator | 2026-01-28 01:04:54 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED 2026-01-28 01:04:54.906278 | orchestrator | 2026-01-28 01:04:54 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:04:54.907800 | orchestrator | 2026-01-28 01:04:54 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:04:54.908502 | orchestrator | 2026-01-28 01:04:54 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:04:54.909268 | orchestrator | 2026-01-28 01:04:54 | INFO  | Task 4c02aad8-cc3c-4c08-9ae0-f33fd5c5f54d is in state SUCCESS 2026-01-28 01:04:54.909292 | orchestrator | 2026-01-28 01:04:54 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:04:57.941858 | orchestrator | 2026-01-28 01:04:57 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED 2026-01-28 01:04:57.941946 | orchestrator | 2026-01-28 01:04:57 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:04:57.941957 | orchestrator | 2026-01-28 01:04:57 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 
01:04:57.941967 | orchestrator | 2026-01-28 01:04:57 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:04:57.941975 | orchestrator | 2026-01-28 01:04:57 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:00.978169 | orchestrator | 2026-01-28 01:05:00 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED 2026-01-28 01:05:00.978486 | orchestrator | 2026-01-28 01:05:00 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:00.979146 | orchestrator | 2026-01-28 01:05:00 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:00.979756 | orchestrator | 2026-01-28 01:05:00 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:00.979782 | orchestrator | 2026-01-28 01:05:00 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:04.008310 | orchestrator | 2026-01-28 01:05:04 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED 2026-01-28 01:05:04.008622 | orchestrator | 2026-01-28 01:05:04 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:04.009260 | orchestrator | 2026-01-28 01:05:04 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:04.010605 | orchestrator | 2026-01-28 01:05:04 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:04.010651 | orchestrator | 2026-01-28 01:05:04 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:07.050657 | orchestrator | 2026-01-28 01:05:07 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED 2026-01-28 01:05:07.050983 | orchestrator | 2026-01-28 01:05:07 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:07.051938 | orchestrator | 2026-01-28 01:05:07 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:07.053219 | orchestrator 
| 2026-01-28 01:05:07 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:07.053303 | orchestrator | 2026-01-28 01:05:07 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:10.100525 | orchestrator | 2026-01-28 01:05:10 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state STARTED 2026-01-28 01:05:10.102190 | orchestrator | 2026-01-28 01:05:10 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:10.104071 | orchestrator | 2026-01-28 01:05:10 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:10.106169 | orchestrator | 2026-01-28 01:05:10 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:10.106766 | orchestrator | 2026-01-28 01:05:10 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:13.152254 | orchestrator | 2026-01-28 01:05:13 | INFO  | Task e90fd1bc-2244-4488-94c1-df56c50ef9d5 is in state SUCCESS 2026-01-28 01:05:13.153429 | orchestrator | 2026-01-28 01:05:13.153464 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-01-28 01:05:13.153477 | orchestrator | 2.16.14 2026-01-28 01:05:13.153490 | orchestrator | 2026-01-28 01:05:13.153501 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-01-28 01:05:13.153512 | orchestrator | 2026-01-28 01:05:13.153524 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-01-28 01:05:13.153535 | orchestrator | Wednesday 28 January 2026 01:03:18 +0000 (0:00:00.254) 0:00:00.254 ***** 2026-01-28 01:05:13.153546 | orchestrator | changed: [testbed-manager] 2026-01-28 01:05:13.153557 | orchestrator | 2026-01-28 01:05:13.153568 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-01-28 01:05:13.153579 | orchestrator | Wednesday 28 January 2026 01:03:19 +0000
(0:00:01.263) 0:00:01.518 ***** 2026-01-28 01:05:13.153591 | orchestrator | changed: [testbed-manager] 2026-01-28 01:05:13.153601 | orchestrator | 2026-01-28 01:05:13.153613 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-01-28 01:05:13.153623 | orchestrator | Wednesday 28 January 2026 01:03:20 +0000 (0:00:01.046) 0:00:02.564 ***** 2026-01-28 01:05:13.153634 | orchestrator | changed: [testbed-manager] 2026-01-28 01:05:13.153645 | orchestrator | 2026-01-28 01:05:13.153656 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-01-28 01:05:13.153688 | orchestrator | Wednesday 28 January 2026 01:03:22 +0000 (0:00:01.377) 0:00:03.942 ***** 2026-01-28 01:05:13.153699 | orchestrator | changed: [testbed-manager] 2026-01-28 01:05:13.153710 | orchestrator | 2026-01-28 01:05:13.153725 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-01-28 01:05:13.153745 | orchestrator | Wednesday 28 January 2026 01:03:23 +0000 (0:00:01.336) 0:00:05.279 ***** 2026-01-28 01:05:13.153766 | orchestrator | changed: [testbed-manager] 2026-01-28 01:05:13.153787 | orchestrator | 2026-01-28 01:05:13.153863 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-01-28 01:05:13.153888 | orchestrator | Wednesday 28 January 2026 01:03:24 +0000 (0:00:01.056) 0:00:06.335 ***** 2026-01-28 01:05:13.153905 | orchestrator | changed: [testbed-manager] 2026-01-28 01:05:13.153923 | orchestrator | 2026-01-28 01:05:13.153940 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-01-28 01:05:13.153959 | orchestrator | Wednesday 28 January 2026 01:03:26 +0000 (0:00:01.831) 0:00:08.167 ***** 2026-01-28 01:05:13.153977 | orchestrator | changed: [testbed-manager] 2026-01-28 01:05:13.153988 | orchestrator | 2026-01-28 01:05:13.153999 | orchestrator | TASK [Write 
ceph_dashboard_password to temporary file] ************************* 2026-01-28 01:05:13.154010 | orchestrator | Wednesday 28 January 2026 01:03:28 +0000 (0:00:01.870) 0:00:10.037 ***** 2026-01-28 01:05:13.154075 | orchestrator | changed: [testbed-manager] 2026-01-28 01:05:13.154088 | orchestrator | 2026-01-28 01:05:13.154100 | orchestrator | TASK [Create admin user] ******************************************************* 2026-01-28 01:05:13.154112 | orchestrator | Wednesday 28 January 2026 01:03:29 +0000 (0:00:01.081) 0:00:11.118 ***** 2026-01-28 01:05:13.154126 | orchestrator | changed: [testbed-manager] 2026-01-28 01:05:13.154139 | orchestrator | 2026-01-28 01:05:13.154151 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-01-28 01:05:13.154163 | orchestrator | Wednesday 28 January 2026 01:04:27 +0000 (0:00:58.085) 0:01:09.204 ***** 2026-01-28 01:05:13.154176 | orchestrator | skipping: [testbed-manager] 2026-01-28 01:05:13.154188 | orchestrator | 2026-01-28 01:05:13.154200 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-28 01:05:13.154212 | orchestrator | 2026-01-28 01:05:13.154224 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-28 01:05:13.154236 | orchestrator | Wednesday 28 January 2026 01:04:27 +0000 (0:00:00.150) 0:01:09.354 ***** 2026-01-28 01:05:13.154249 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:13.154261 | orchestrator | 2026-01-28 01:05:13.154273 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-28 01:05:13.154285 | orchestrator | 2026-01-28 01:05:13.154298 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-28 01:05:13.154310 | orchestrator | Wednesday 28 January 2026 01:04:29 +0000 (0:00:01.975) 0:01:11.330 ***** 2026-01-28 01:05:13.154322 | orchestrator | 
changed: [testbed-node-1] 2026-01-28 01:05:13.154335 | orchestrator | 2026-01-28 01:05:13.154348 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-01-28 01:05:13.154360 | orchestrator | 2026-01-28 01:05:13.154372 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-01-28 01:05:13.154385 | orchestrator | Wednesday 28 January 2026 01:04:40 +0000 (0:00:11.256) 0:01:22.586 ***** 2026-01-28 01:05:13.154398 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:05:13.154408 | orchestrator | 2026-01-28 01:05:13.154420 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:05:13.154431 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-01-28 01:05:13.154443 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 01:05:13.154454 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 01:05:13.154477 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 01:05:13.154488 | orchestrator | 2026-01-28 01:05:13.154499 | orchestrator | 2026-01-28 01:05:13.154510 | orchestrator | 2026-01-28 01:05:13.154535 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:05:13.154547 | orchestrator | Wednesday 28 January 2026 01:04:51 +0000 (0:00:10.982) 0:01:33.568 ***** 2026-01-28 01:05:13.154558 | orchestrator | =============================================================================== 2026-01-28 01:05:13.154569 | orchestrator | Create admin user ------------------------------------------------------ 58.09s 2026-01-28 01:05:13.154595 | orchestrator | Restart ceph manager service ------------------------------------------- 24.21s 
2026-01-28 01:05:13.154606 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.87s 2026-01-28 01:05:13.154617 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.83s 2026-01-28 01:05:13.154628 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.38s 2026-01-28 01:05:13.154649 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.34s 2026-01-28 01:05:13.154660 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.26s 2026-01-28 01:05:13.154671 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.08s 2026-01-28 01:05:13.154681 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s 2026-01-28 01:05:13.154692 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.05s 2026-01-28 01:05:13.154703 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s 2026-01-28 01:05:13.154714 | orchestrator | 2026-01-28 01:05:13.154724 | orchestrator | 2026-01-28 01:05:13.154735 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:05:13.154746 | orchestrator | 2026-01-28 01:05:13.154757 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 01:05:13.154858 | orchestrator | Wednesday 28 January 2026 01:03:57 +0000 (0:00:00.192) 0:00:00.192 ***** 2026-01-28 01:05:13.154879 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:05:13.154900 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:05:13.154921 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:05:13.154942 | orchestrator | 2026-01-28 01:05:13.154958 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 
01:05:13.154976 | orchestrator | Wednesday 28 January 2026 01:03:57 +0000 (0:00:00.280) 0:00:00.473 ***** 2026-01-28 01:05:13.154994 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-01-28 01:05:13.155013 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-01-28 01:05:13.155031 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-01-28 01:05:13.155048 | orchestrator | 2026-01-28 01:05:13.155065 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-01-28 01:05:13.155083 | orchestrator | 2026-01-28 01:05:13.155101 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-28 01:05:13.155118 | orchestrator | Wednesday 28 January 2026 01:03:58 +0000 (0:00:00.845) 0:00:01.318 ***** 2026-01-28 01:05:13.155142 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:05:13.155167 | orchestrator | 2026-01-28 01:05:13.155187 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-01-28 01:05:13.155206 | orchestrator | Wednesday 28 January 2026 01:03:59 +0000 (0:00:00.999) 0:00:02.318 ***** 2026-01-28 01:05:13.155234 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-01-28 01:05:13.155254 | orchestrator | 2026-01-28 01:05:13.155272 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-01-28 01:05:13.155299 | orchestrator | Wednesday 28 January 2026 01:04:03 +0000 (0:00:04.086) 0:00:06.405 ***** 2026-01-28 01:05:13.155337 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-01-28 01:05:13.155358 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-01-28 01:05:13.155377 | orchestrator | 
2026-01-28 01:05:13.155396 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-01-28 01:05:13.155414 | orchestrator | Wednesday 28 January 2026 01:04:10 +0000 (0:00:06.764) 0:00:13.169 ***** 2026-01-28 01:05:13.155432 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-28 01:05:13.155451 | orchestrator | 2026-01-28 01:05:13.155469 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-01-28 01:05:13.155488 | orchestrator | Wednesday 28 January 2026 01:04:13 +0000 (0:00:03.443) 0:00:16.613 ***** 2026-01-28 01:05:13.155508 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-28 01:05:13.155527 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-01-28 01:05:13.155546 | orchestrator | 2026-01-28 01:05:13.155564 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-01-28 01:05:13.155583 | orchestrator | Wednesday 28 January 2026 01:04:17 +0000 (0:00:03.749) 0:00:20.363 ***** 2026-01-28 01:05:13.155603 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-28 01:05:13.155631 | orchestrator | 2026-01-28 01:05:13.155652 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-01-28 01:05:13.155669 | orchestrator | Wednesday 28 January 2026 01:04:20 +0000 (0:00:03.395) 0:00:23.758 ***** 2026-01-28 01:05:13.155691 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-01-28 01:05:13.155717 | orchestrator | 2026-01-28 01:05:13.155736 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-28 01:05:13.155754 | orchestrator | Wednesday 28 January 2026 01:04:24 +0000 (0:00:04.069) 0:00:27.827 ***** 2026-01-28 01:05:13.155773 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:13.155791 | orchestrator | skipping: 
[testbed-node-1] 2026-01-28 01:05:13.155865 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:05:13.155885 | orchestrator | 2026-01-28 01:05:13.155904 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-01-28 01:05:13.155932 | orchestrator | Wednesday 28 January 2026 01:04:25 +0000 (0:00:00.529) 0:00:28.357 ***** 2026-01-28 01:05:13.155972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.155997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.156029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.156049 | orchestrator | 2026-01-28 01:05:13.156068 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-01-28 01:05:13.156087 | orchestrator | Wednesday 28 January 2026 01:04:26 +0000 (0:00:01.349) 0:00:29.706 ***** 2026-01-28 01:05:13.156106 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:13.156124 | orchestrator | 2026-01-28 01:05:13.156142 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-01-28 01:05:13.156160 | orchestrator | Wednesday 28 January 2026 01:04:27 +0000 (0:00:00.450) 0:00:30.157 ***** 2026-01-28 01:05:13.156179 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:13.156198 | orchestrator | skipping: 
[testbed-node-1] 2026-01-28 01:05:13.156216 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:05:13.156234 | orchestrator | 2026-01-28 01:05:13.156252 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-01-28 01:05:13.156271 | orchestrator | Wednesday 28 January 2026 01:04:28 +0000 (0:00:01.707) 0:00:31.864 ***** 2026-01-28 01:05:13.156290 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:05:13.156308 | orchestrator | 2026-01-28 01:05:13.156327 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-01-28 01:05:13.156346 | orchestrator | Wednesday 28 January 2026 01:04:29 +0000 (0:00:00.645) 0:00:32.510 ***** 2026-01-28 01:05:13.156382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.156403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.156433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.156452 | orchestrator | 2026-01-28 01:05:13.156471 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-01-28 01:05:13.156488 | orchestrator | Wednesday 28 January 2026 01:04:31 +0000 (0:00:01.893) 
0:00:34.403 ***** 2026-01-28 01:05:13.156508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-28 01:05:13.156528 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:13.156560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2026-01-28 01:05:13.156580 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:05:13.156599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-28 01:05:13.156628 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:05:13.156647 | orchestrator | 2026-01-28 01:05:13.156666 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-01-28 01:05:13.156684 | orchestrator | Wednesday 28 January 2026 01:04:32 +0000 (0:00:01.091) 0:00:35.495 ***** 2026-01-28 01:05:13.156702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-28 01:05:13.156719 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:13.156736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-28 01:05:13.156755 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:05:13.156781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-28 01:05:13.156858 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:05:13.156882 | orchestrator | 2026-01-28 01:05:13.156912 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-01-28 01:05:13.156932 | orchestrator | Wednesday 28 January 2026 01:04:33 +0000 (0:00:00.922) 0:00:36.418 ***** 2026-01-28 01:05:13.156952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.156986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.157008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.157026 | orchestrator | 2026-01-28 01:05:13.157045 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-01-28 01:05:13.157064 | orchestrator | Wednesday 28 January 2026 01:04:34 +0000 (0:00:01.404) 0:00:37.823 ***** 2026-01-28 01:05:13.157089 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.157118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.157150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.157171 | orchestrator | 2026-01-28 01:05:13.157188 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-01-28 01:05:13.157200 | orchestrator | Wednesday 28 January 2026 01:04:38 +0000 (0:00:04.112) 0:00:41.936 ***** 2026-01-28 01:05:13.157211 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-28 01:05:13.157222 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-28 01:05:13.157233 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-01-28 01:05:13.157243 | orchestrator | 2026-01-28 01:05:13.157254 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-01-28 01:05:13.157265 | orchestrator | Wednesday 28 January 2026 01:04:40 +0000 (0:00:01.903) 0:00:43.840 ***** 2026-01-28 01:05:13.157276 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:05:13.157287 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:13.157298 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:05:13.157308 
| orchestrator | 2026-01-28 01:05:13.157320 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-01-28 01:05:13.157330 | orchestrator | Wednesday 28 January 2026 01:04:42 +0000 (0:00:01.886) 0:00:45.727 ***** 2026-01-28 01:05:13.157342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-28 01:05:13.157371 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:13.157399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-28 01:05:13.157417 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:05:13.157434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-01-28 01:05:13.157451 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:05:13.157468 | orchestrator | 2026-01-28 01:05:13.157485 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-01-28 01:05:13.157502 | orchestrator | Wednesday 28 January 2026 01:04:43 +0000 (0:00:00.490) 0:00:46.217 ***** 2026-01-28 01:05:13.157518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.157536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.157583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-01-28 01:05:13.157611 | orchestrator | 2026-01-28 01:05:13.157630 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-01-28 01:05:13.157647 | orchestrator | Wednesday 28 January 2026 01:04:44 +0000 (0:00:01.066) 0:00:47.284 ***** 2026-01-28 01:05:13.157662 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:13.157675 | orchestrator | 2026-01-28 01:05:13.157692 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-01-28 01:05:13.157708 | orchestrator | Wednesday 28 January 2026 01:04:47 +0000 (0:00:02.867) 0:00:50.152 ***** 2026-01-28 01:05:13.157723 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:13.157739 | orchestrator | 2026-01-28 01:05:13.157756 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-01-28 01:05:13.157773 | orchestrator | Wednesday 28 January 2026 01:04:49 +0000 (0:00:02.786) 0:00:52.938 ***** 2026-01-28 01:05:13.157790 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:13.157831 | orchestrator | 2026-01-28 01:05:13.157849 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-28 01:05:13.157865 | orchestrator | Wednesday 28 January 2026 01:05:05 +0000 (0:00:15.126) 0:01:08.064 ***** 2026-01-28 01:05:13.157882 | orchestrator | 2026-01-28 01:05:13.157898 | orchestrator | TASK [placement : Flush handlers] ********************************************** 
2026-01-28 01:05:13.157915 | orchestrator | Wednesday 28 January 2026 01:05:05 +0000 (0:00:00.061) 0:01:08.125 ***** 2026-01-28 01:05:13.157931 | orchestrator | 2026-01-28 01:05:13.157948 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-01-28 01:05:13.157965 | orchestrator | Wednesday 28 January 2026 01:05:05 +0000 (0:00:00.057) 0:01:08.183 ***** 2026-01-28 01:05:13.157982 | orchestrator | 2026-01-28 01:05:13.157998 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-01-28 01:05:13.158048 | orchestrator | Wednesday 28 January 2026 01:05:05 +0000 (0:00:00.061) 0:01:08.245 ***** 2026-01-28 01:05:13.158069 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:13.158086 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:05:13.158104 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:05:13.158121 | orchestrator | 2026-01-28 01:05:13.158138 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:05:13.158156 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-28 01:05:13.158173 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-28 01:05:13.158190 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-28 01:05:13.158207 | orchestrator | 2026-01-28 01:05:13.158224 | orchestrator | 2026-01-28 01:05:13.158241 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:05:13.158272 | orchestrator | Wednesday 28 January 2026 01:05:11 +0000 (0:00:06.102) 0:01:14.348 ***** 2026-01-28 01:05:13.158289 | orchestrator | =============================================================================== 2026-01-28 01:05:13.158305 | orchestrator | placement : Running placement 
bootstrap container ---------------------- 15.13s 2026-01-28 01:05:13.158323 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.76s 2026-01-28 01:05:13.158340 | orchestrator | placement : Restart placement-api container ----------------------------- 6.10s 2026-01-28 01:05:13.158355 | orchestrator | placement : Copying over placement.conf --------------------------------- 4.11s 2026-01-28 01:05:13.158365 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.09s 2026-01-28 01:05:13.158374 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.07s 2026-01-28 01:05:13.158384 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.75s 2026-01-28 01:05:13.158393 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.44s 2026-01-28 01:05:13.158403 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.40s 2026-01-28 01:05:13.158413 | orchestrator | placement : Creating placement databases -------------------------------- 2.87s 2026-01-28 01:05:13.158422 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.79s 2026-01-28 01:05:13.158431 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.90s 2026-01-28 01:05:13.158441 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.89s 2026-01-28 01:05:13.158450 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.89s 2026-01-28 01:05:13.158460 | orchestrator | placement : Set placement policy file ----------------------------------- 1.71s 2026-01-28 01:05:13.158469 | orchestrator | placement : Copying over config.json files for services ----------------- 1.40s 2026-01-28 01:05:13.158479 | orchestrator | placement : Ensuring config directories exist 
--------------------------- 1.35s 2026-01-28 01:05:13.158494 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.09s 2026-01-28 01:05:13.158504 | orchestrator | placement : Check placement containers ---------------------------------- 1.07s 2026-01-28 01:05:13.158513 | orchestrator | placement : include_tasks ----------------------------------------------- 1.00s 2026-01-28 01:05:13.158523 | orchestrator | 2026-01-28 01:05:13 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:13.158590 | orchestrator | 2026-01-28 01:05:13 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:13.159183 | orchestrator | 2026-01-28 01:05:13 | INFO  | Task 5930d1e3-c6b0-42ce-a31f-2ec133f6828a is in state STARTED 2026-01-28 01:05:13.161200 | orchestrator | 2026-01-28 01:05:13 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:13.161410 | orchestrator | 2026-01-28 01:05:13 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:16.209027 | orchestrator | 2026-01-28 01:05:16 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:16.209117 | orchestrator | 2026-01-28 01:05:16 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:16.210791 | orchestrator | 2026-01-28 01:05:16 | INFO  | Task 5930d1e3-c6b0-42ce-a31f-2ec133f6828a is in state STARTED 2026-01-28 01:05:16.213111 | orchestrator | 2026-01-28 01:05:16 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:16.213229 | orchestrator | 2026-01-28 01:05:16 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:19.238378 | orchestrator | 2026-01-28 01:05:19 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:19.238768 | orchestrator | 2026-01-28 01:05:19 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 
2026-01-28 01:05:19.239506 | orchestrator | 2026-01-28 01:05:19 | INFO  | Task 5930d1e3-c6b0-42ce-a31f-2ec133f6828a is in state SUCCESS 2026-01-28 01:05:19.240475 | orchestrator | 2026-01-28 01:05:19 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:19.240917 | orchestrator | 2026-01-28 01:05:19 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:05:19.240996 | orchestrator | 2026-01-28 01:05:19 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:22.290181 | orchestrator | 2026-01-28 01:05:22 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:22.290937 | orchestrator | 2026-01-28 01:05:22 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:22.292972 | orchestrator | 2026-01-28 01:05:22 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:22.295737 | orchestrator | 2026-01-28 01:05:22 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:05:22.295795 | orchestrator | 2026-01-28 01:05:22 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:25.349067 | orchestrator | 2026-01-28 01:05:25 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:25.349247 | orchestrator | 2026-01-28 01:05:25 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:25.349271 | orchestrator | 2026-01-28 01:05:25 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:25.349513 | orchestrator | 2026-01-28 01:05:25 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:05:25.349539 | orchestrator | 2026-01-28 01:05:25 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:28.379036 | orchestrator | 2026-01-28 01:05:28 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:28.379585 | 
orchestrator | 2026-01-28 01:05:28 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:28.379971 | orchestrator | 2026-01-28 01:05:28 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:28.381108 | orchestrator | 2026-01-28 01:05:28 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:05:28.381352 | orchestrator | 2026-01-28 01:05:28 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:31.411644 | orchestrator | 2026-01-28 01:05:31 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:31.413216 | orchestrator | 2026-01-28 01:05:31 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:31.415121 | orchestrator | 2026-01-28 01:05:31 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:31.416618 | orchestrator | 2026-01-28 01:05:31 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:05:31.417259 | orchestrator | 2026-01-28 01:05:31 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:34.460538 | orchestrator | 2026-01-28 01:05:34 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:34.463077 | orchestrator | 2026-01-28 01:05:34 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:34.466601 | orchestrator | 2026-01-28 01:05:34 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:34.468492 | orchestrator | 2026-01-28 01:05:34 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:05:34.468983 | orchestrator | 2026-01-28 01:05:34 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:37.504342 | orchestrator | 2026-01-28 01:05:37 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state STARTED 2026-01-28 01:05:37.506838 | orchestrator | 2026-01-28 
01:05:37 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:37.507392 | orchestrator | 2026-01-28 01:05:37 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:37.508290 | orchestrator | 2026-01-28 01:05:37 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:05:37.508316 | orchestrator | 2026-01-28 01:05:37 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:40.537687 | orchestrator | 2026-01-28 01:05:40 | INFO  | Task 9bc29449-abcc-4393-8c6c-3354c2ac94ec is in state SUCCESS 2026-01-28 01:05:40.538562 | orchestrator | 2026-01-28 01:05:40.538593 | orchestrator | 2026-01-28 01:05:40.538601 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:05:40.538608 | orchestrator | 2026-01-28 01:05:40.538614 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 01:05:40.538621 | orchestrator | Wednesday 28 January 2026 01:05:15 +0000 (0:00:00.166) 0:00:00.166 ***** 2026-01-28 01:05:40.538627 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:05:40.538634 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:05:40.538640 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:05:40.538646 | orchestrator | 2026-01-28 01:05:40.538652 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 01:05:40.538658 | orchestrator | Wednesday 28 January 2026 01:05:15 +0000 (0:00:00.317) 0:00:00.484 ***** 2026-01-28 01:05:40.538664 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-01-28 01:05:40.538671 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-01-28 01:05:40.538677 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-01-28 01:05:40.538683 | orchestrator | 2026-01-28 01:05:40.538689 | orchestrator | PLAY [Wait for the Keystone service] 
******************************************* 2026-01-28 01:05:40.538695 | orchestrator | 2026-01-28 01:05:40.538701 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-01-28 01:05:40.538707 | orchestrator | Wednesday 28 January 2026 01:05:16 +0000 (0:00:00.659) 0:00:01.143 ***** 2026-01-28 01:05:40.538712 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:05:40.538718 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:05:40.538724 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:05:40.538730 | orchestrator | 2026-01-28 01:05:40.538736 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:05:40.538743 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 01:05:40.538750 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 01:05:40.538756 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 01:05:40.538762 | orchestrator | 2026-01-28 01:05:40.538768 | orchestrator | 2026-01-28 01:05:40.538774 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:05:40.538780 | orchestrator | Wednesday 28 January 2026 01:05:16 +0000 (0:00:00.673) 0:00:01.817 ***** 2026-01-28 01:05:40.538786 | orchestrator | =============================================================================== 2026-01-28 01:05:40.538922 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.67s 2026-01-28 01:05:40.538931 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s 2026-01-28 01:05:40.538937 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-01-28 01:05:40.538962 | orchestrator | 2026-01-28 01:05:40.538968 | orchestrator | 
2026-01-28 01:05:40.538974 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:05:40.539012 | orchestrator | 2026-01-28 01:05:40.539019 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 01:05:40.539025 | orchestrator | Wednesday 28 January 2026 01:02:47 +0000 (0:00:00.266) 0:00:00.266 ***** 2026-01-28 01:05:40.539031 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:05:40.539037 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:05:40.539043 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:05:40.539049 | orchestrator | 2026-01-28 01:05:40.539054 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 01:05:40.539060 | orchestrator | Wednesday 28 January 2026 01:02:48 +0000 (0:00:00.270) 0:00:00.536 ***** 2026-01-28 01:05:40.539078 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-01-28 01:05:40.539084 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-01-28 01:05:40.539091 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-01-28 01:05:40.539097 | orchestrator | 2026-01-28 01:05:40.539102 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-01-28 01:05:40.539108 | orchestrator | 2026-01-28 01:05:40.539114 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-28 01:05:40.539120 | orchestrator | Wednesday 28 January 2026 01:02:48 +0000 (0:00:00.339) 0:00:00.876 ***** 2026-01-28 01:05:40.539126 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:05:40.539132 | orchestrator | 2026-01-28 01:05:40.539138 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-01-28 01:05:40.539144 | orchestrator | 
Wednesday 28 January 2026 01:02:48 +0000 (0:00:00.468) 0:00:01.344 ***** 2026-01-28 01:05:40.539149 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-01-28 01:05:40.539155 | orchestrator | 2026-01-28 01:05:40.539161 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-01-28 01:05:40.539167 | orchestrator | Wednesday 28 January 2026 01:02:52 +0000 (0:00:03.506) 0:00:04.850 ***** 2026-01-28 01:05:40.539173 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-01-28 01:05:40.539180 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-01-28 01:05:40.539187 | orchestrator | 2026-01-28 01:05:40.539193 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-01-28 01:05:40.539200 | orchestrator | Wednesday 28 January 2026 01:02:58 +0000 (0:00:05.974) 0:00:10.825 ***** 2026-01-28 01:05:40.539215 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-28 01:05:40.539222 | orchestrator | 2026-01-28 01:05:40.539229 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-01-28 01:05:40.539236 | orchestrator | Wednesday 28 January 2026 01:03:01 +0000 (0:00:03.220) 0:00:14.045 ***** 2026-01-28 01:05:40.539252 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-28 01:05:40.539259 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-01-28 01:05:40.539274 | orchestrator | 2026-01-28 01:05:40.539281 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-01-28 01:05:40.539287 | orchestrator | Wednesday 28 January 2026 01:03:05 +0000 (0:00:04.072) 0:00:18.118 ***** 2026-01-28 01:05:40.539294 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-28 01:05:40.539301 | 
orchestrator | 2026-01-28 01:05:40.539308 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-01-28 01:05:40.539332 | orchestrator | Wednesday 28 January 2026 01:03:08 +0000 (0:00:03.094) 0:00:21.212 ***** 2026-01-28 01:05:40.539340 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-01-28 01:05:40.539346 | orchestrator | 2026-01-28 01:05:40.539353 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-01-28 01:05:40.539365 | orchestrator | Wednesday 28 January 2026 01:03:12 +0000 (0:00:03.709) 0:00:24.922 ***** 2026-01-28 01:05:40.539375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.539386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.539437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.539447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': 
'30'}}}) 2026-01-28 01:05:40.539668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539867 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.539959 | orchestrator | 2026-01-28 01:05:40.539967 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-01-28 01:05:40.539973 | orchestrator | Wednesday 28 January 2026 01:03:15 +0000 (0:00:02.823) 0:00:27.745 ***** 2026-01-28 01:05:40.539979 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:40.540062 | orchestrator | 2026-01-28 01:05:40.540069 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-01-28 01:05:40.540075 | orchestrator | Wednesday 28 January 2026 01:03:15 +0000 (0:00:00.136) 0:00:27.882 ***** 2026-01-28 01:05:40.540081 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:40.540087 | orchestrator | 
skipping: [testbed-node-1] 2026-01-28 01:05:40.540093 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:05:40.540098 | orchestrator | 2026-01-28 01:05:40.540104 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-28 01:05:40.540110 | orchestrator | Wednesday 28 January 2026 01:03:15 +0000 (0:00:00.283) 0:00:28.165 ***** 2026-01-28 01:05:40.540116 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:05:40.540121 | orchestrator | 2026-01-28 01:05:40.540127 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-01-28 01:05:40.540133 | orchestrator | Wednesday 28 January 2026 01:03:16 +0000 (0:00:00.754) 0:00:28.919 ***** 2026-01-28 01:05:40.540139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.540149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.540156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.540176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.540298 | orchestrator | 2026-01-28 01:05:40.540304 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-01-28 01:05:40.540310 | orchestrator | Wednesday 28 January 2026 01:03:22 +0000 (0:00:06.190) 0:00:35.110 ***** 2026-01-28 01:05:40.540355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.540365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-28 01:05:40.540371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.540382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-01-28 01:05:40.541301 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:40.541309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.541322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-28 01:05:40.541329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541369 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:05:40.541376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.541382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-28 01:05:40.541391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 
01:05:40.541418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541424 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:05:40.541430 | orchestrator | 2026-01-28 01:05:40.541436 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-01-28 01:05:40.541443 | orchestrator | Wednesday 28 January 2026 01:03:23 +0000 (0:00:01.262) 0:00:36.372 ***** 2026-01-28 01:05:40.541449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.541458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-28 01:05:40.541470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541499 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:40.541505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.541511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-28 01:05:40.541524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541574 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:05:40.541581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.541587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-28 01:05:40.541600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.541629 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:05:40.541635 | orchestrator | 2026-01-28 01:05:40.541641 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-01-28 
01:05:40.541647 | orchestrator | Wednesday 28 January 2026 01:03:25 +0000 (0:00:01.440) 0:00:37.813 ***** 2026-01-28 01:05:40.541692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.541707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.541714 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.541723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541889 | orchestrator | 2026-01-28 01:05:40.541896 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-01-28 01:05:40.541903 | orchestrator | Wednesday 28 January 2026 01:03:32 +0000 (0:00:06.905) 0:00:44.719 ***** 2026-01-28 01:05:40.541910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.541931 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.541938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.541950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.541993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542346 | orchestrator | 2026-01-28 01:05:40.542355 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-01-28 01:05:40.542361 | orchestrator | Wednesday 28 January 2026 01:03:52 +0000 (0:00:20.677) 0:01:05.396 ***** 2026-01-28 01:05:40.542366 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-28 01:05:40.542372 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-28 01:05:40.542377 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-01-28 01:05:40.542383 | orchestrator | 2026-01-28 01:05:40.542388 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-01-28 01:05:40.542413 | orchestrator | Wednesday 28 January 2026 01:03:58 +0000 (0:00:05.778) 0:01:11.174 ***** 2026-01-28 01:05:40.542419 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-28 01:05:40.542424 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-28 01:05:40.542430 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-01-28 01:05:40.542435 | orchestrator | 2026-01-28 01:05:40.542440 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-01-28 01:05:40.542446 | orchestrator | Wednesday 28 January 2026 01:04:03 +0000 (0:00:04.306) 0:01:15.481 ***** 2026-01-28 01:05:40.542451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.542460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.542470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-01-28 01:05:40.542481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-01-28 01:05:40.542501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-01-28 01:05:40.542523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-01-28 01:05:40.542544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542569 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542590 | orchestrator | 2026-01-28 01:05:40.542596 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-01-28 01:05:40.542601 | orchestrator | Wednesday 28 January 2026 01:04:06 +0000 (0:00:03.484) 0:01:18.965 ***** 2026-01-28 01:05:40.542607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.542615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 
01:05:40.542621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.542634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.542750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:05:40.542755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:05:40.542761 | orchestrator |
2026-01-28 01:05:40.542767 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-01-28 01:05:40.542772 | orchestrator | Wednesday 28 January 2026 01:04:09 +0000 (0:00:01.090) 0:01:22.242 *****
2026-01-28 01:05:40.542778 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:05:40.542783 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:05:40.542788 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:05:40.542832 | orchestrator |
2026-01-28 01:05:40.542839 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-01-28 01:05:40.542844 | orchestrator | Wednesday 28 January 2026 01:04:10 +0000 (0:00:01.090) 0:01:23.333 *****
2026-01-28 01:05:40.542850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.542858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-28 01:05:40.542868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542895 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:40.542900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.542909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-28 01:05:40.542922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.542950 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:05:40.542955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-01-28 01:05:40.542963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-01-28 01:05:40.542994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.543003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.543008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-01-28 01:05:40.543013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:05:40.543018 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:05:40.543023 | orchestrator |
2026-01-28 01:05:40.543028 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-01-28 01:05:40.543033 | orchestrator | Wednesday 28 January 2026 01:04:12 +0000 (0:00:01.775) 0:01:25.108 *****
2026-01-28 01:05:40.543038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-01-28 01:05:40.543091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.543101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-01-28 01:05:40.543107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:05:40.543261 | orchestrator | 2026-01-28 01:05:40.543266 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-01-28 01:05:40.543271 | orchestrator | Wednesday 28 January 2026 01:04:17 +0000 (0:00:04.514) 0:01:29.623 ***** 2026-01-28 01:05:40.543276 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:05:40.543281 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:05:40.543286 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:05:40.543296 | orchestrator | 2026-01-28 01:05:40.543301 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-01-28 01:05:40.543306 | orchestrator | Wednesday 28 January 2026 01:04:17 +0000 (0:00:00.610) 0:01:30.234 ***** 2026-01-28 01:05:40.543311 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-01-28 01:05:40.543315 | orchestrator | 2026-01-28 01:05:40.543320 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-01-28 01:05:40.543325 | orchestrator | Wednesday 28 January 2026 01:04:19 +0000 (0:00:02.030) 0:01:32.264 ***** 2026-01-28 01:05:40.543330 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-01-28 01:05:40.543335 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-01-28 01:05:40.543340 | orchestrator | 2026-01-28 01:05:40.543345 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-01-28 01:05:40.543349 | orchestrator | Wednesday 28 January 2026 01:04:22 +0000 (0:00:02.526) 0:01:34.790 ***** 2026-01-28 01:05:40.543354 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:40.543359 | orchestrator | 2026-01-28 01:05:40.543364 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-28 01:05:40.543369 | orchestrator | Wednesday 28 January 2026 01:04:37 +0000 (0:00:15.559) 0:01:50.350 ***** 2026-01-28 01:05:40.543373 | orchestrator | 2026-01-28 01:05:40.543378 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-28 01:05:40.543383 | orchestrator | Wednesday 28 January 2026 01:04:38 +0000 (0:00:00.613) 0:01:50.964 ***** 2026-01-28 01:05:40.543388 | orchestrator | 2026-01-28 01:05:40.543392 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-01-28 01:05:40.543400 | orchestrator | Wednesday 28 January 2026 01:04:38 +0000 (0:00:00.156) 0:01:51.120 ***** 2026-01-28 01:05:40.543405 | orchestrator | 2026-01-28 
01:05:40.543410 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-01-28 01:05:40.543415 | orchestrator | Wednesday 28 January 2026 01:04:38 +0000 (0:00:00.127) 0:01:51.248 ***** 2026-01-28 01:05:40.543420 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:40.543425 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:05:40.543430 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:05:40.543434 | orchestrator | 2026-01-28 01:05:40.543439 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-01-28 01:05:40.543444 | orchestrator | Wednesday 28 January 2026 01:04:47 +0000 (0:00:09.012) 0:02:00.260 ***** 2026-01-28 01:05:40.543449 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:40.543454 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:05:40.543459 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:05:40.543463 | orchestrator | 2026-01-28 01:05:40.543468 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-01-28 01:05:40.543473 | orchestrator | Wednesday 28 January 2026 01:04:55 +0000 (0:00:07.257) 0:02:07.518 ***** 2026-01-28 01:05:40.543478 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:05:40.543482 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:40.543487 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:05:40.543492 | orchestrator | 2026-01-28 01:05:40.543497 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-01-28 01:05:40.543502 | orchestrator | Wednesday 28 January 2026 01:05:05 +0000 (0:00:10.663) 0:02:18.181 ***** 2026-01-28 01:05:40.543506 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:40.543511 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:05:40.543516 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:05:40.543521 | orchestrator | 2026-01-28 01:05:40.543526 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-01-28 01:05:40.543530 | orchestrator | Wednesday 28 January 2026 01:05:17 +0000 (0:00:11.256) 0:02:29.438 ***** 2026-01-28 01:05:40.543535 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:05:40.543540 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:05:40.543545 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:40.543550 | orchestrator | 2026-01-28 01:05:40.543560 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-01-28 01:05:40.543568 | orchestrator | Wednesday 28 January 2026 01:05:25 +0000 (0:00:08.812) 0:02:38.250 ***** 2026-01-28 01:05:40.543574 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:40.543578 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:05:40.543583 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:05:40.543588 | orchestrator | 2026-01-28 01:05:40.543593 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-01-28 01:05:40.543598 | orchestrator | Wednesday 28 January 2026 01:05:30 +0000 (0:00:05.099) 0:02:43.349 ***** 2026-01-28 01:05:40.543602 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:05:40.543607 | orchestrator | 2026-01-28 01:05:40.543612 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:05:40.543617 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-28 01:05:40.543623 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-28 01:05:40.543628 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-28 01:05:40.543633 | orchestrator | 2026-01-28 01:05:40.543638 | orchestrator | 2026-01-28 01:05:40.543643 | orchestrator | TASKS RECAP 
******************************************************************** 2026-01-28 01:05:40.543654 | orchestrator | Wednesday 28 January 2026 01:05:37 +0000 (0:00:06.702) 0:02:50.052 ***** 2026-01-28 01:05:40.543659 | orchestrator | =============================================================================== 2026-01-28 01:05:40.543664 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.68s 2026-01-28 01:05:40.543669 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.56s 2026-01-28 01:05:40.543673 | orchestrator | designate : Restart designate-producer container ----------------------- 11.26s 2026-01-28 01:05:40.543678 | orchestrator | designate : Restart designate-central container ------------------------ 10.66s 2026-01-28 01:05:40.543683 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.01s 2026-01-28 01:05:40.543688 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.81s 2026-01-28 01:05:40.543692 | orchestrator | designate : Restart designate-api container ----------------------------- 7.26s 2026-01-28 01:05:40.543697 | orchestrator | designate : Copying over config.json files for services ----------------- 6.91s 2026-01-28 01:05:40.543702 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.70s 2026-01-28 01:05:40.543707 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.19s 2026-01-28 01:05:40.543712 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 5.97s 2026-01-28 01:05:40.543717 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.78s 2026-01-28 01:05:40.543721 | orchestrator | designate : Restart designate-worker container -------------------------- 5.10s 2026-01-28 01:05:40.543726 | orchestrator | designate : Check designate 
containers ---------------------------------- 4.51s 2026-01-28 01:05:40.543731 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.31s 2026-01-28 01:05:40.543736 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.07s 2026-01-28 01:05:40.543740 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.71s 2026-01-28 01:05:40.543748 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.51s 2026-01-28 01:05:40.543753 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.48s 2026-01-28 01:05:40.543758 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.28s 2026-01-28 01:05:40.543763 | orchestrator | 2026-01-28 01:05:40 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:40.543773 | orchestrator | 2026-01-28 01:05:40 | INFO  | Task 8c70af78-5766-4ca2-8950-a191d3874c13 is in state STARTED 2026-01-28 01:05:40.543778 | orchestrator | 2026-01-28 01:05:40 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:40.543783 | orchestrator | 2026-01-28 01:05:40 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:05:40.543788 | orchestrator | 2026-01-28 01:05:40 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:05:43.581087 | orchestrator | 2026-01-28 01:05:43 | INFO  | Task 92168016-92ba-4246-b4a1-f57264c3564e is in state STARTED 2026-01-28 01:05:43.581200 | orchestrator | 2026-01-28 01:05:43 | INFO  | Task 8c70af78-5766-4ca2-8950-a191d3874c13 is in state STARTED 2026-01-28 01:05:43.581217 | orchestrator | 2026-01-28 01:05:43 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:05:43.581676 | orchestrator | 2026-01-28 01:05:43 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 
[... 2026-01-28 01:05:43 through 01:06:13: repeated polling rounds every ~3 s, each reporting tasks 92168016-92ba-4246-b4a1-f57264c3564e, 8c70af78-5766-4ca2-8950-a191d3874c13, 55e8855d-3ea2-469f-9e3a-d536f6327f19 and 4eec2e12-b453-4451-a64b-8d09268f6b88 in state STARTED, followed by "Wait 1 second(s) until the next check" ...] 2026-01-28 01:06:17.055638 | orchestrator | 2026-01-28 01:06:17 | INFO  | Task 8c70af78-5766-4ca2-8950-a191d3874c13 is in state SUCCESS 2026-01-28 01:06:20.088379 | orchestrator | 2026-01-28 01:06:20 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED [... 2026-01-28 01:06:20 through 01:06:47: repeated polling rounds every ~3 s, each reporting tasks 92168016-92ba-4246-b4a1-f57264c3564e, 55e8855d-3ea2-469f-9e3a-d536f6327f19, 4eec2e12-b453-4451-a64b-8d09268f6b88 and 39c78134-c3cf-45d3-b851-9720d95499fb in state STARTED, followed by "Wait 1 second(s) until the next check" ...] 2026-01-28 01:06:50.520314 | orchestrator | 2026-01-28 01:06:50 | INFO  | Task
92168016-92ba-4246-b4a1-f57264c3564e is in state SUCCESS 2026-01-28 01:06:50.523048 | orchestrator | 2026-01-28 01:06:50.523100 | orchestrator | 2026-01-28 01:06:50.523108 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:06:50.523115 | orchestrator | 2026-01-28 01:06:50.523121 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 01:06:50.523127 | orchestrator | Wednesday 28 January 2026 01:05:42 +0000 (0:00:00.255) 0:00:00.255 ***** 2026-01-28 01:06:50.523133 | orchestrator | ok: [testbed-manager] 2026-01-28 01:06:50.523140 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:06:50.523146 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:06:50.523152 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:06:50.523157 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:06:50.523163 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:06:50.523168 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:06:50.523173 | orchestrator | 2026-01-28 01:06:50.523179 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 01:06:50.523185 | orchestrator | Wednesday 28 January 2026 01:05:43 +0000 (0:00:00.771) 0:00:01.026 ***** 2026-01-28 01:06:50.523190 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-01-28 01:06:50.523196 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-01-28 01:06:50.523202 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-01-28 01:06:50.523207 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-01-28 01:06:50.523212 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-01-28 01:06:50.523218 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-01-28 01:06:50.523223 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-01-28 01:06:50.523228 | 
orchestrator | 2026-01-28 01:06:50.523234 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-01-28 01:06:50.523239 | orchestrator | 2026-01-28 01:06:50.523244 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-01-28 01:06:50.523250 | orchestrator | Wednesday 28 January 2026 01:05:44 +0000 (0:00:00.784) 0:00:01.811 ***** 2026-01-28 01:06:50.523256 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 01:06:50.523263 | orchestrator | 2026-01-28 01:06:50.523269 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-01-28 01:06:50.523274 | orchestrator | Wednesday 28 January 2026 01:05:49 +0000 (0:00:04.628) 0:00:06.440 ***** 2026-01-28 01:06:50.523280 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-01-28 01:06:50.523285 | orchestrator | 2026-01-28 01:06:50.523290 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-01-28 01:06:50.523296 | orchestrator | Wednesday 28 January 2026 01:05:52 +0000 (0:00:03.704) 0:00:10.144 ***** 2026-01-28 01:06:50.523302 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-01-28 01:06:50.523309 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-01-28 01:06:50.523315 | orchestrator | 2026-01-28 01:06:50.523320 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-01-28 01:06:50.523326 | orchestrator | Wednesday 28 January 2026 01:05:59 +0000 (0:00:06.250) 0:00:16.394 ***** 2026-01-28 01:06:50.523331 | orchestrator | ok: [testbed-manager] => 
(item=service)
2026-01-28 01:06:50.523337 | orchestrator |
2026-01-28 01:06:50.523342 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-01-28 01:06:50.523347 | orchestrator | Wednesday 28 January 2026 01:06:02 +0000 (0:00:03.379) 0:00:19.774 *****
2026-01-28 01:06:50.523372 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-28 01:06:50.523378 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-01-28 01:06:50.523383 | orchestrator |
2026-01-28 01:06:50.523388 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-01-28 01:06:50.523394 | orchestrator | Wednesday 28 January 2026 01:06:05 +0000 (0:00:03.281) 0:00:23.056 *****
2026-01-28 01:06:50.523409 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-01-28 01:06:50.523414 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-01-28 01:06:50.523420 | orchestrator |
2026-01-28 01:06:50.523425 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-01-28 01:06:50.523430 | orchestrator | Wednesday 28 January 2026 01:06:11 +0000 (0:00:05.411) 0:00:28.467 *****
2026-01-28 01:06:50.523463 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-01-28 01:06:50.523468 | orchestrator |
2026-01-28 01:06:50.523476 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 01:06:50.523485 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:06:50.523493 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:06:50.523502 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:06:50.523508 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:06:50.523513 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:06:50.523529 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:06:50.523535 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:06:50.523540 | orchestrator |
2026-01-28 01:06:50.523546 | orchestrator |
2026-01-28 01:06:50.523551 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 01:06:50.523572 | orchestrator | Wednesday 28 January 2026 01:06:15 +0000 (0:00:04.371) 0:00:32.839 *****
2026-01-28 01:06:50.523579 | orchestrator | ===============================================================================
2026-01-28 01:06:50.523584 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.25s
2026-01-28 01:06:50.523590 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.41s
2026-01-28 01:06:50.523597 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 4.63s
2026-01-28 01:06:50.523606 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.37s
2026-01-28 01:06:50.523611 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.70s
2026-01-28 01:06:50.523618 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.38s
2026-01-28 01:06:50.523624 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.28s
2026-01-28 01:06:50.523630 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.78s
2026-01-28 01:06:50.523637 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.77s
2026-01-28 01:06:50.523643 | orchestrator |
2026-01-28 01:06:50.523649 | orchestrator |
2026-01-28 01:06:50.523656 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 01:06:50.523662 | orchestrator |
2026-01-28 01:06:50.523668 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 01:06:50.523682 | orchestrator | Wednesday 28 January 2026 01:04:56 +0000 (0:00:00.687) 0:00:00.687 *****
2026-01-28 01:06:50.523695 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:06:50.523701 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:06:50.523707 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:06:50.523713 | orchestrator |
2026-01-28 01:06:50.523719 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 01:06:50.523726 | orchestrator | Wednesday 28 January 2026 01:04:56 +0000 (0:00:00.628) 0:00:01.316 *****
2026-01-28 01:06:50.523732 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-01-28 01:06:50.523738 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-01-28 01:06:50.523745 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-01-28 01:06:50.523751 | orchestrator |
2026-01-28 01:06:50.523757 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-01-28 01:06:50.523764 | orchestrator |
2026-01-28 01:06:50.523809 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-28 01:06:50.523816 | orchestrator | Wednesday 28 January 2026 01:04:57 +0000 (0:00:00.559) 0:00:01.875 *****
2026-01-28 01:06:50.523822 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:06:50.523829 | orchestrator |
2026-01-28 01:06:50.523835 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-01-28 01:06:50.523842 | orchestrator | Wednesday 28 January 2026 01:04:58 +0000 (0:00:00.880) 0:00:02.755 *****
2026-01-28 01:06:50.523848 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-01-28 01:06:50.523854 | orchestrator |
2026-01-28 01:06:50.523861 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-01-28 01:06:50.523867 | orchestrator | Wednesday 28 January 2026 01:05:02 +0000 (0:00:03.773) 0:00:06.529 *****
2026-01-28 01:06:50.523874 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-01-28 01:06:50.523880 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-01-28 01:06:50.523886 | orchestrator |
2026-01-28 01:06:50.523896 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-01-28 01:06:50.523902 | orchestrator | Wednesday 28 January 2026 01:05:08 +0000 (0:00:06.709) 0:00:13.238 *****
2026-01-28 01:06:50.523907 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-28 01:06:50.523913 | orchestrator |
2026-01-28 01:06:50.523918 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-01-28 01:06:50.523923 | orchestrator | Wednesday 28 January 2026 01:05:12 +0000 (0:00:03.356) 0:00:16.594 *****
2026-01-28 01:06:50.523929 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-28 01:06:50.523934 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-01-28 01:06:50.523940 | orchestrator |
2026-01-28 01:06:50.523945 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-01-28 01:06:50.523951 | orchestrator | Wednesday 28 January 2026 01:05:15 +0000 (0:00:03.306) 0:00:19.901 *****
2026-01-28 01:06:50.523956 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-28 01:06:50.523961 | orchestrator |
2026-01-28 01:06:50.523967 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-01-28 01:06:50.523972 | orchestrator | Wednesday 28 January 2026 01:05:18 +0000 (0:00:03.473) 0:00:23.374 *****
2026-01-28 01:06:50.523978 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-01-28 01:06:50.523983 | orchestrator |
2026-01-28 01:06:50.523989 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-01-28 01:06:50.523994 | orchestrator | Wednesday 28 January 2026 01:05:22 +0000 (0:00:03.850) 0:00:27.224 *****
2026-01-28 01:06:50.524000 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:06:50.524005 | orchestrator |
2026-01-28 01:06:50.524010 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-01-28 01:06:50.524020 | orchestrator | Wednesday 28 January 2026 01:05:26 +0000 (0:00:03.532) 0:00:30.757 *****
2026-01-28 01:06:50.524030 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:06:50.524036 | orchestrator |
2026-01-28 01:06:50.524041 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-01-28 01:06:50.524047 | orchestrator | Wednesday 28 January 2026 01:05:29 +0000 (0:00:03.436) 0:00:34.194 *****
2026-01-28 01:06:50.524052 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:06:50.524058 | orchestrator |
2026-01-28 01:06:50.524063 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-01-28 01:06:50.524068 | orchestrator | Wednesday 28 January 2026 01:05:32 +0000 (0:00:02.946) 0:00:37.140 *****
2026-01-28 01:06:50.524076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524128 | orchestrator |
2026-01-28 01:06:50.524134 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-01-28 01:06:50.524139 | orchestrator | Wednesday 28 January 2026 01:05:33 +0000 (0:00:01.208) 0:00:38.348 *****
2026-01-28 01:06:50.524145 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:06:50.524150 | orchestrator |
2026-01-28 01:06:50.524155 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-01-28 01:06:50.524161 | orchestrator | Wednesday 28 January 2026 01:05:34 +0000 (0:00:00.155) 0:00:38.504 *****
2026-01-28 01:06:50.524166 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:06:50.524172 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:06:50.524177 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:06:50.524182 | orchestrator |
2026-01-28 01:06:50.524187 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-01-28 01:06:50.524193 | orchestrator | Wednesday 28 January 2026 01:05:34 +0000 (0:00:00.584) 0:00:39.089 *****
2026-01-28 01:06:50.524198 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-28 01:06:50.524204 | orchestrator |
2026-01-28 01:06:50.524209 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-01-28 01:06:50.524214 | orchestrator | Wednesday 28 January 2026 01:05:35 +0000 (0:00:01.270) 0:00:40.360 *****
2026-01-28 01:06:50.524229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524279 | orchestrator |
2026-01-28 01:06:50.524284 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-01-28 01:06:50.524290 | orchestrator | Wednesday 28 January 2026 01:05:38 +0000 (0:00:02.320) 0:00:42.680 *****
2026-01-28 01:06:50.524295 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:06:50.524301 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:06:50.524306 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:06:50.524312 | orchestrator |
2026-01-28 01:06:50.524317 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-01-28 01:06:50.524322 | orchestrator | Wednesday 28 January 2026 01:05:38 +0000 (0:00:00.296) 0:00:42.977 *****
2026-01-28 01:06:50.524328 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:06:50.524333 | orchestrator |
2026-01-28 01:06:50.524339 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-01-28 01:06:50.524344 | orchestrator | Wednesday 28 January 2026 01:05:39 +0000 (0:00:00.679) 0:00:43.656 *****
2026-01-28 01:06:50.524354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524399 | orchestrator |
2026-01-28 01:06:50.524405 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-01-28 01:06:50.524410 | orchestrator | Wednesday 28 January 2026 01:05:41 +0000 (0:00:02.438) 0:00:46.094 *****
2026-01-28 01:06:50.524416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524431 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:06:50.524440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524454 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:06:50.524460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524471 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:06:50.524477 | orchestrator |
2026-01-28 01:06:50.524482 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-01-28 01:06:50.524487 | orchestrator | Wednesday 28 January 2026 01:05:42 +0000 (0:00:00.566) 0:00:46.661 *****
2026-01-28 01:06:50.524496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524511 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:06:50.524718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524739 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:06:50.524747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:06:50.524783 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:06:50.524789 | orchestrator |
2026-01-28 01:06:50.524794 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-01-28 01:06:50.524800 | orchestrator | Wednesday 28 January 2026 01:05:43 +0000 (0:00:01.076) 0:00:47.738 *****
2026-01-28 01:06:50.524806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-01-28 01:06:50.524822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name':
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-28 01:06:50.524832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:06:50.524841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:06:50.524847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:06:50.524852 | orchestrator | 2026-01-28 01:06:50.524860 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-01-28 01:06:50.524866 | orchestrator | Wednesday 28 January 2026 01:05:45 +0000 (0:00:02.503) 0:00:50.242 ***** 2026-01-28 01:06:50.524871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-28 01:06:50.524877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-28 01:06:50.524892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-28 01:06:50.524898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:06:50.524908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:06:50.524914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:06:50.524920 | orchestrator | 2026-01-28 01:06:50.524926 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-01-28 01:06:50.524931 | orchestrator | Wednesday 28 January 2026 01:05:56 +0000 (0:00:10.910) 0:01:01.153 ***** 2026-01-28 01:06:50.524941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-28 01:06:50.524950 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-28 01:06:50.524955 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:06:50.524976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-28 01:06:50.524987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-28 01:06:50.524993 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:06:50.524999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-01-28 01:06:50.525008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-01-28 01:06:50.525014 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:06:50.525019 | orchestrator | 2026-01-28 01:06:50.525025 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-01-28 01:06:50.525030 | orchestrator | Wednesday 28 January 2026 01:05:58 +0000 (0:00:01.887) 0:01:03.040 ***** 2026-01-28 01:06:50.525039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-28 01:06:50.525045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:06:50.525054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-28 01:06:50.525063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-01-28 01:06:50.525069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:06:50.525077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:06:50.525083 | orchestrator | 
2026-01-28 01:06:50.525089 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-01-28 01:06:50.525094 | orchestrator | Wednesday 28 January 2026 01:06:02 +0000 (0:00:03.969) 0:01:07.009 ***** 2026-01-28 01:06:50.525100 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:06:50.525105 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:06:50.525110 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:06:50.525116 | orchestrator | 2026-01-28 01:06:50.525121 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-01-28 01:06:50.525127 | orchestrator | Wednesday 28 January 2026 01:06:03 +0000 (0:00:00.395) 0:01:07.405 ***** 2026-01-28 01:06:50.525132 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:06:50.525138 | orchestrator | 2026-01-28 01:06:50.525143 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-01-28 01:06:50.525148 | orchestrator | Wednesday 28 January 2026 01:06:04 +0000 (0:00:01.804) 0:01:09.209 ***** 2026-01-28 01:06:50.525154 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:06:50.525159 | orchestrator | 2026-01-28 01:06:50.525164 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-01-28 01:06:50.525172 | orchestrator | Wednesday 28 January 2026 01:06:06 +0000 (0:00:02.041) 0:01:11.251 ***** 2026-01-28 01:06:50.525178 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:06:50.525183 | orchestrator | 2026-01-28 01:06:50.525193 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-28 01:06:50.525198 | orchestrator | Wednesday 28 January 2026 01:06:19 +0000 (0:00:12.981) 0:01:24.232 ***** 2026-01-28 01:06:50.525203 | orchestrator | 2026-01-28 01:06:50.525209 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-28 
01:06:50.525214 | orchestrator | Wednesday 28 January 2026 01:06:19 +0000 (0:00:00.102) 0:01:24.334 ***** 2026-01-28 01:06:50.525219 | orchestrator | 2026-01-28 01:06:50.525225 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-01-28 01:06:50.525230 | orchestrator | Wednesday 28 January 2026 01:06:20 +0000 (0:00:00.100) 0:01:24.435 ***** 2026-01-28 01:06:50.525236 | orchestrator | 2026-01-28 01:06:50.525241 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-01-28 01:06:50.525246 | orchestrator | Wednesday 28 January 2026 01:06:20 +0000 (0:00:00.106) 0:01:24.541 ***** 2026-01-28 01:06:50.525252 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:06:50.525257 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:06:50.525262 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:06:50.525268 | orchestrator | 2026-01-28 01:06:50.525273 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-01-28 01:06:50.525278 | orchestrator | Wednesday 28 January 2026 01:06:34 +0000 (0:00:14.162) 0:01:38.704 ***** 2026-01-28 01:06:50.525284 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:06:50.525289 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:06:50.525295 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:06:50.525300 | orchestrator | 2026-01-28 01:06:50.525305 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:06:50.525311 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-01-28 01:06:50.525317 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-28 01:06:50.525323 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-28 01:06:50.525328 | orchestrator | 
2026-01-28 01:06:50.525334 | orchestrator | 2026-01-28 01:06:50.525340 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:06:50.525347 | orchestrator | Wednesday 28 January 2026 01:06:49 +0000 (0:00:14.824) 0:01:53.528 ***** 2026-01-28 01:06:50.525353 | orchestrator | =============================================================================== 2026-01-28 01:06:50.525359 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.82s 2026-01-28 01:06:50.525366 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.16s 2026-01-28 01:06:50.525373 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 12.98s 2026-01-28 01:06:50.525379 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 10.91s 2026-01-28 01:06:50.525385 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.71s 2026-01-28 01:06:50.525391 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.97s 2026-01-28 01:06:50.525397 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.85s 2026-01-28 01:06:50.525404 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.77s 2026-01-28 01:06:50.525410 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.53s 2026-01-28 01:06:50.525416 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.47s 2026-01-28 01:06:50.525422 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.44s 2026-01-28 01:06:50.525432 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.36s 2026-01-28 01:06:50.525438 | orchestrator | service-ks-register : magnum | Creating users 
--------------------------- 3.31s 2026-01-28 01:06:50.525448 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 2.95s 2026-01-28 01:06:50.525455 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.50s 2026-01-28 01:06:50.525461 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.44s 2026-01-28 01:06:50.525467 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.32s 2026-01-28 01:06:50.525473 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.04s 2026-01-28 01:06:50.525480 | orchestrator | magnum : Copying over existing policy file ------------------------------ 1.89s 2026-01-28 01:06:50.525486 | orchestrator | magnum : Creating Magnum database --------------------------------------- 1.80s 2026-01-28 01:06:50.525493 | orchestrator | 2026-01-28 01:06:50 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:06:50.525499 | orchestrator | 2026-01-28 01:06:50 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:06:50.525506 | orchestrator | 2026-01-28 01:06:50 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:06:50.525512 | orchestrator | 2026-01-28 01:06:50 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:06:53.562460 | orchestrator | 2026-01-28 01:06:53 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:06:53.562939 | orchestrator | 2026-01-28 01:06:53 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:06:53.563507 | orchestrator | 2026-01-28 01:06:53 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:06:53.564469 | orchestrator | 2026-01-28 01:06:53 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:06:53.564505 | 
orchestrator | 2026-01-28 01:06:53 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:06:56.595142 | orchestrator | 2026-01-28 01:06:56 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:06:56.595993 | orchestrator | 2026-01-28 01:06:56 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:06:56.596783 | orchestrator | 2026-01-28 01:06:56 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:06:56.598887 | orchestrator | 2026-01-28 01:06:56 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:06:56.599635 | orchestrator | 2026-01-28 01:06:56 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:06:59.633697 | orchestrator | 2026-01-28 01:06:59 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:06:59.633887 | orchestrator | 2026-01-28 01:06:59 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:06:59.635590 | orchestrator | 2026-01-28 01:06:59 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:06:59.636838 | orchestrator | 2026-01-28 01:06:59 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:06:59.636902 | orchestrator | 2026-01-28 01:06:59 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:07:02.672844 | orchestrator | 2026-01-28 01:07:02 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:07:02.674388 | orchestrator | 2026-01-28 01:07:02 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:07:02.674426 | orchestrator | 2026-01-28 01:07:02 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:07:02.677399 | orchestrator | 2026-01-28 01:07:02 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:07:02.677453 | orchestrator | 2026-01-28 
01:07:02 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:07:05.708579 | orchestrator | 2026-01-28 01:07:05 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:07:05.709132 | orchestrator | 2026-01-28 01:07:05 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:07:05.709839 | orchestrator | 2026-01-28 01:07:05 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:07:05.711028 | orchestrator | 2026-01-28 01:07:05 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:07:05.711067 | orchestrator | 2026-01-28 01:07:05 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:07:08.772874 | orchestrator | 2026-01-28 01:07:08 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:07:08.772943 | orchestrator | 2026-01-28 01:07:08 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state STARTED 2026-01-28 01:07:08.773642 | orchestrator | 2026-01-28 01:07:08 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:07:08.774405 | orchestrator | 2026-01-28 01:07:08 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:07:08.774435 | orchestrator | 2026-01-28 01:07:08 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:07:11.803270 | orchestrator | 2026-01-28 01:07:11 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:07:11.805192 | orchestrator | 2026-01-28 01:07:11 | INFO  | Task 55e8855d-3ea2-469f-9e3a-d536f6327f19 is in state SUCCESS 2026-01-28 01:07:11.806650 | orchestrator | 2026-01-28 01:07:11.806697 | orchestrator | 2026-01-28 01:07:11.806718 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:07:11.806943 | orchestrator | 2026-01-28 01:07:11.806966 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-01-28 01:07:11.806978 | orchestrator | Wednesday 28 January 2026 01:02:47 +0000 (0:00:00.234) 0:00:00.234 ***** 2026-01-28 01:07:11.806989 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:07:11.807002 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:07:11.807012 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:07:11.807023 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:07:11.807034 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:07:11.807044 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:07:11.807055 | orchestrator | 2026-01-28 01:07:11.807066 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 01:07:11.807077 | orchestrator | Wednesday 28 January 2026 01:02:48 +0000 (0:00:00.613) 0:00:00.847 ***** 2026-01-28 01:07:11.807088 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-01-28 01:07:11.807099 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-01-28 01:07:11.807111 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-01-28 01:07:11.807122 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-01-28 01:07:11.807133 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-01-28 01:07:11.807143 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-01-28 01:07:11.807154 | orchestrator | 2026-01-28 01:07:11.807165 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-01-28 01:07:11.807176 | orchestrator | 2026-01-28 01:07:11.807186 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-28 01:07:11.807197 | orchestrator | Wednesday 28 January 2026 01:02:49 +0000 (0:00:00.545) 0:00:01.393 ***** 2026-01-28 01:07:11.807209 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 01:07:11.807221 | orchestrator | 2026-01-28 01:07:11.807258 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-01-28 01:07:11.807272 | orchestrator | Wednesday 28 January 2026 01:02:50 +0000 (0:00:01.000) 0:00:02.394 ***** 2026-01-28 01:07:11.807284 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:07:11.807298 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:07:11.807310 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:07:11.807323 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:07:11.807335 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:07:11.807347 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:07:11.807360 | orchestrator | 2026-01-28 01:07:11.807373 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-01-28 01:07:11.807404 | orchestrator | Wednesday 28 January 2026 01:02:51 +0000 (0:00:01.046) 0:00:03.440 ***** 2026-01-28 01:07:11.807418 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:07:11.807431 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:07:11.807444 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:07:11.807505 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:07:11.807518 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:07:11.807530 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:07:11.807544 | orchestrator | 2026-01-28 01:07:11.807557 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-01-28 01:07:11.807570 | orchestrator | Wednesday 28 January 2026 01:02:51 +0000 (0:00:00.913) 0:00:04.353 ***** 2026-01-28 01:07:11.807582 | orchestrator | ok: [testbed-node-0] => { 2026-01-28 01:07:11.807596 | orchestrator |  "changed": false, 2026-01-28 01:07:11.807606 | orchestrator |  "msg": "All assertions passed" 2026-01-28 01:07:11.807618 | orchestrator | } 2026-01-28 01:07:11.807629 | orchestrator | 
ok: [testbed-node-1] => { 2026-01-28 01:07:11.807639 | orchestrator |  "changed": false, 2026-01-28 01:07:11.807651 | orchestrator |  "msg": "All assertions passed" 2026-01-28 01:07:11.807669 | orchestrator | } 2026-01-28 01:07:11.807686 | orchestrator | ok: [testbed-node-2] => { 2026-01-28 01:07:11.807703 | orchestrator |  "changed": false, 2026-01-28 01:07:11.807721 | orchestrator |  "msg": "All assertions passed" 2026-01-28 01:07:11.807739 | orchestrator | } 2026-01-28 01:07:11.807783 | orchestrator | ok: [testbed-node-3] => { 2026-01-28 01:07:11.807808 | orchestrator |  "changed": false, 2026-01-28 01:07:11.807826 | orchestrator |  "msg": "All assertions passed" 2026-01-28 01:07:11.807837 | orchestrator | } 2026-01-28 01:07:11.807848 | orchestrator | ok: [testbed-node-4] => { 2026-01-28 01:07:11.807858 | orchestrator |  "changed": false, 2026-01-28 01:07:11.807869 | orchestrator |  "msg": "All assertions passed" 2026-01-28 01:07:11.807880 | orchestrator | } 2026-01-28 01:07:11.807890 | orchestrator | ok: [testbed-node-5] => { 2026-01-28 01:07:11.807906 | orchestrator |  "changed": false, 2026-01-28 01:07:11.807925 | orchestrator |  "msg": "All assertions passed" 2026-01-28 01:07:11.807942 | orchestrator | } 2026-01-28 01:07:11.807960 | orchestrator | 2026-01-28 01:07:11.807978 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-01-28 01:07:11.807996 | orchestrator | Wednesday 28 January 2026 01:02:52 +0000 (0:00:00.665) 0:00:05.019 ***** 2026-01-28 01:07:11.808013 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.808030 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.808049 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.808067 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.808085 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.808122 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.808143 | orchestrator | 2026-01-28 
01:07:11.808162 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-01-28 01:07:11.808182 | orchestrator | Wednesday 28 January 2026 01:02:53 +0000 (0:00:00.550) 0:00:05.570 ***** 2026-01-28 01:07:11.808201 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-01-28 01:07:11.808213 | orchestrator | 2026-01-28 01:07:11.808224 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-01-28 01:07:11.808235 | orchestrator | Wednesday 28 January 2026 01:02:56 +0000 (0:00:03.355) 0:00:08.925 ***** 2026-01-28 01:07:11.808261 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-01-28 01:07:11.808278 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-01-28 01:07:11.808290 | orchestrator | 2026-01-28 01:07:11.808318 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-01-28 01:07:11.808329 | orchestrator | Wednesday 28 January 2026 01:03:02 +0000 (0:00:05.723) 0:00:14.649 ***** 2026-01-28 01:07:11.808340 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-28 01:07:11.808351 | orchestrator | 2026-01-28 01:07:11.808362 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-01-28 01:07:11.808372 | orchestrator | Wednesday 28 January 2026 01:03:05 +0000 (0:00:03.115) 0:00:17.764 ***** 2026-01-28 01:07:11.808383 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-28 01:07:11.808394 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-01-28 01:07:11.808405 | orchestrator | 2026-01-28 01:07:11.808415 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-01-28 01:07:11.808426 | orchestrator | Wednesday 28 January 2026 01:03:08 
+0000 (0:00:03.483) 0:00:21.247 ***** 2026-01-28 01:07:11.808437 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-28 01:07:11.808448 | orchestrator | 2026-01-28 01:07:11.808459 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-01-28 01:07:11.808470 | orchestrator | Wednesday 28 January 2026 01:03:12 +0000 (0:00:03.428) 0:00:24.676 ***** 2026-01-28 01:07:11.808480 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-01-28 01:07:11.808491 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-01-28 01:07:11.808502 | orchestrator | 2026-01-28 01:07:11.808513 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-28 01:07:11.808523 | orchestrator | Wednesday 28 January 2026 01:03:19 +0000 (0:00:07.341) 0:00:32.018 ***** 2026-01-28 01:07:11.808534 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.808545 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.808556 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.808566 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.808577 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.808588 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.808598 | orchestrator | 2026-01-28 01:07:11.808609 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-01-28 01:07:11.808620 | orchestrator | Wednesday 28 January 2026 01:03:20 +0000 (0:00:00.681) 0:00:32.699 ***** 2026-01-28 01:07:11.808631 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.808641 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.808652 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.808663 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.808674 | orchestrator | skipping: [testbed-node-2] 2026-01-28 
01:07:11.808685 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.808695 | orchestrator | 2026-01-28 01:07:11.808706 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-01-28 01:07:11.808717 | orchestrator | Wednesday 28 January 2026 01:03:23 +0000 (0:00:02.859) 0:00:35.559 ***** 2026-01-28 01:07:11.808728 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:07:11.808739 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:07:11.808750 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:07:11.808787 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:07:11.808799 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:07:11.808809 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:07:11.808820 | orchestrator | 2026-01-28 01:07:11.808831 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-01-28 01:07:11.808842 | orchestrator | Wednesday 28 January 2026 01:03:24 +0000 (0:00:01.286) 0:00:36.845 ***** 2026-01-28 01:07:11.808852 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.808872 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.808883 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.808894 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.808904 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.808915 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.808925 | orchestrator | 2026-01-28 01:07:11.808936 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-01-28 01:07:11.808947 | orchestrator | Wednesday 28 January 2026 01:03:27 +0000 (0:00:03.238) 0:00:40.084 ***** 2026-01-28 01:07:11.808967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.808993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.809005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.809018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.809038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.809055 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.809067 | orchestrator | 2026-01-28 01:07:11.809078 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-01-28 01:07:11.809089 | orchestrator | Wednesday 28 January 2026 01:03:31 +0000 (0:00:03.788) 0:00:43.872 ***** 2026-01-28 01:07:11.809100 | orchestrator | [WARNING]: Skipped 2026-01-28 01:07:11.809112 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-01-28 01:07:11.809123 | orchestrator | due to this access issue: 2026-01-28 01:07:11.809134 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-01-28 01:07:11.809145 | orchestrator | a directory 2026-01-28 01:07:11.809156 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 01:07:11.809167 | orchestrator | 2026-01-28 01:07:11.809184 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-28 01:07:11.809196 | orchestrator | 
Wednesday 28 January 2026 01:03:32 +0000 (0:00:00.772) 0:00:44.645 ***** 2026-01-28 01:07:11.809207 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 01:07:11.809219 | orchestrator | 2026-01-28 01:07:11.809230 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-01-28 01:07:11.809241 | orchestrator | Wednesday 28 January 2026 01:03:33 +0000 (0:00:01.240) 0:00:45.886 ***** 2026-01-28 01:07:11.809252 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.809264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.809283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.809299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.809319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.809331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.809357 | orchestrator | 2026-01-28 01:07:11.809368 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-01-28 01:07:11.809379 | 
orchestrator | Wednesday 28 January 2026 01:03:37 +0000 (0:00:04.066) 0:00:49.952 ***** 2026-01-28 01:07:11.809390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.809402 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.809414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.809425 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.809448 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.809460 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.809471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.809483 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.809500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.809512 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.809523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.809534 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.809545 | orchestrator | 2026-01-28 01:07:11.809556 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-01-28 01:07:11.809567 | orchestrator | Wednesday 28 January 2026 01:03:41 
+0000 (0:00:03.787) 0:00:53.739 ***** 2026-01-28 01:07:11.809583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.809594 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.809612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2026-01-28 01:07:11.809629 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.809641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.809652 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.809663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.809675 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.809686 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.809697 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.809712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.809724 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.809735 | orchestrator | 2026-01-28 01:07:11.809807 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-01-28 01:07:11.809841 | orchestrator | Wednesday 28 January 2026 01:03:44 +0000 (0:00:03.330) 0:00:57.070 ***** 2026-01-28 
01:07:11.809860 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.809879 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.809898 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.809929 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.809949 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.809968 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.809987 | orchestrator | 2026-01-28 01:07:11.810005 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-01-28 01:07:11.810095 | orchestrator | Wednesday 28 January 2026 01:03:48 +0000 (0:00:03.338) 0:01:00.409 ***** 2026-01-28 01:07:11.810121 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.810141 | orchestrator | 2026-01-28 01:07:11.810161 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-01-28 01:07:11.810180 | orchestrator | Wednesday 28 January 2026 01:03:48 +0000 (0:00:00.089) 0:01:00.498 ***** 2026-01-28 01:07:11.810201 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.810222 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.810241 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.810261 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.810280 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.810299 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.810319 | orchestrator | 2026-01-28 01:07:11.810341 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-01-28 01:07:11.810363 | orchestrator | Wednesday 28 January 2026 01:03:49 +0000 (0:00:00.954) 0:01:01.453 ***** 2026-01-28 01:07:11.810386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.810410 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.810434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.810448 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.810467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.810492 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.810688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.810815 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.810836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.810849 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.810861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.810873 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.810884 | orchestrator | 2026-01-28 01:07:11.810896 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-01-28 01:07:11.810908 | orchestrator | Wednesday 28 January 2026 01:03:52 +0000 (0:00:03.303) 0:01:04.756 ***** 2026-01-28 01:07:11.810930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.811012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.811027 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.811039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.811051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.811062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.811081 | orchestrator | 2026-01-28 01:07:11.811093 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-01-28 01:07:11.811109 | orchestrator | Wednesday 28 January 2026 01:03:57 +0000 (0:00:04.870) 0:01:09.627 ***** 2026-01-28 01:07:11.811127 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.811139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.811151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.811163 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.811179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.811203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.811214 | orchestrator | 2026-01-28 01:07:11.811227 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-01-28 01:07:11.811240 | orchestrator | Wednesday 28 January 2026 01:04:03 +0000 (0:00:05.798) 0:01:15.425 ***** 2026-01-28 01:07:11.811253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.811266 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.811279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.811291 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.811304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.811323 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.811341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.811355 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.811375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.811389 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.811402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.811415 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.811427 | orchestrator | 2026-01-28 01:07:11.811439 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-01-28 01:07:11.811452 | orchestrator | Wednesday 28 January 2026 01:04:05 +0000 (0:00:02.714) 0:01:18.140 ***** 2026-01-28 01:07:11.811465 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:07:11.811478 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.811490 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.811502 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:07:11.811515 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.811528 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:07:11.811541 | orchestrator | 2026-01-28 01:07:11.811553 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-01-28 01:07:11.811565 | orchestrator | Wednesday 28 January 2026 01:04:09 +0000 (0:00:03.459) 0:01:21.599 ***** 2026-01-28 01:07:11.811585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.811599 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.811619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.811639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-01-28 01:07:11.811651 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.811662 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.811673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.811685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.811702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.811714 | orchestrator | 2026-01-28 01:07:11.811725 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-01-28 01:07:11.811736 | orchestrator | Wednesday 28 January 2026 01:04:13 +0000 (0:00:04.366) 0:01:25.966 ***** 2026-01-28 01:07:11.811747 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.811782 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.811799 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.811810 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.811821 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.811831 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.811842 | orchestrator | 2026-01-28 01:07:11.811853 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-01-28 01:07:11.811864 | orchestrator | Wednesday 28 January 2026 01:04:16 +0000 (0:00:02.549) 
0:01:28.516 ***** 2026-01-28 01:07:11.811875 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.811886 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.811896 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.811907 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.811918 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.811929 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.811940 | orchestrator | 2026-01-28 01:07:11.811951 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-01-28 01:07:11.811961 | orchestrator | Wednesday 28 January 2026 01:04:18 +0000 (0:00:02.566) 0:01:31.082 ***** 2026-01-28 01:07:11.811979 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.811991 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.812001 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.812012 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.812023 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.812034 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.812045 | orchestrator | 2026-01-28 01:07:11.812056 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-01-28 01:07:11.812067 | orchestrator | Wednesday 28 January 2026 01:04:21 +0000 (0:00:02.848) 0:01:33.930 ***** 2026-01-28 01:07:11.812078 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.812089 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.812100 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.812111 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.812122 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.812132 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.812143 | orchestrator | 2026-01-28 01:07:11.812154 | orchestrator | TASK [neutron : Copying over eswitchd.conf] 
************************************ 2026-01-28 01:07:11.812165 | orchestrator | Wednesday 28 January 2026 01:04:23 +0000 (0:00:02.008) 0:01:35.939 ***** 2026-01-28 01:07:11.812183 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.812194 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.812205 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.812215 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.812226 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.812237 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.812248 | orchestrator | 2026-01-28 01:07:11.812259 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-01-28 01:07:11.812270 | orchestrator | Wednesday 28 January 2026 01:04:26 +0000 (0:00:02.549) 0:01:38.488 ***** 2026-01-28 01:07:11.812281 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.812291 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.812302 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.812313 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.812324 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.812334 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.812345 | orchestrator | 2026-01-28 01:07:11.812356 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-01-28 01:07:11.812367 | orchestrator | Wednesday 28 January 2026 01:04:29 +0000 (0:00:03.508) 0:01:41.996 ***** 2026-01-28 01:07:11.812378 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-28 01:07:11.812390 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.812401 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-28 01:07:11.812412 | orchestrator | skipping: [testbed-node-0] 2026-01-28 
01:07:11.812423 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-28 01:07:11.812434 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.812444 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-28 01:07:11.812455 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.812466 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-28 01:07:11.812477 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.812488 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-01-28 01:07:11.812498 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.812510 | orchestrator | 2026-01-28 01:07:11.812521 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-01-28 01:07:11.812532 | orchestrator | Wednesday 28 January 2026 01:04:31 +0000 (0:00:02.035) 0:01:44.032 ***** 2026-01-28 01:07:11.812548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-01-28 01:07:11.812560 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.812578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.812603 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.812614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.812626 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.812637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.812648 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.812659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.812671 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.812687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.812707 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.812718 | orchestrator | 2026-01-28 01:07:11.812729 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-01-28 01:07:11.812740 | orchestrator | Wednesday 28 January 2026 01:04:33 +0000 (0:00:02.294) 0:01:46.326 ***** 2026-01-28 01:07:11.813041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.813060 | 
orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.813072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.813083 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.813094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
 2026-01-28 01:07:11.813105 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.813117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.813144 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.813162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.813173 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.813185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.813196 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.813207 | orchestrator | 2026-01-28 01:07:11.813218 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-01-28 01:07:11.813228 | orchestrator | Wednesday 28 January 2026 01:04:36 +0000 (0:00:02.466) 0:01:48.792 ***** 2026-01-28 01:07:11.813239 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.813250 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.813261 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.813272 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.813282 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.813293 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.813304 | orchestrator | 2026-01-28 01:07:11.813315 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-01-28 01:07:11.813326 | orchestrator | Wednesday 28 January 2026 01:04:39 +0000 (0:00:03.143) 0:01:51.936 ***** 2026-01-28 01:07:11.813336 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.813347 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.813358 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.813368 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:07:11.813379 | 
orchestrator | changed: [testbed-node-5] 2026-01-28 01:07:11.813390 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:07:11.813400 | orchestrator | 2026-01-28 01:07:11.813411 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-01-28 01:07:11.813422 | orchestrator | Wednesday 28 January 2026 01:04:44 +0000 (0:00:05.056) 0:01:56.992 ***** 2026-01-28 01:07:11.813433 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.813443 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.813478 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.813489 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.813500 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.813511 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.813522 | orchestrator | 2026-01-28 01:07:11.813533 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-01-28 01:07:11.813544 | orchestrator | Wednesday 28 January 2026 01:04:46 +0000 (0:00:01.910) 0:01:58.902 ***** 2026-01-28 01:07:11.813561 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.813572 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.813582 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.813593 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.813604 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.813615 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.813626 | orchestrator | 2026-01-28 01:07:11.813637 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-01-28 01:07:11.813648 | orchestrator | Wednesday 28 January 2026 01:04:49 +0000 (0:00:02.686) 0:02:01.589 ***** 2026-01-28 01:07:11.813658 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.813669 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.813680 | 
orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.813691 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.813702 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.813712 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.813723 | orchestrator | 2026-01-28 01:07:11.813734 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-01-28 01:07:11.813745 | orchestrator | Wednesday 28 January 2026 01:04:52 +0000 (0:00:03.506) 0:02:05.095 ***** 2026-01-28 01:07:11.813797 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.813810 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.813821 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.813831 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.813842 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.813853 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.813863 | orchestrator | 2026-01-28 01:07:11.813874 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-01-28 01:07:11.813891 | orchestrator | Wednesday 28 January 2026 01:04:54 +0000 (0:00:01.950) 0:02:07.045 ***** 2026-01-28 01:07:11.813902 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.813913 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.813924 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.813934 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.813945 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.813956 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.813967 | orchestrator | 2026-01-28 01:07:11.813978 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-01-28 01:07:11.813989 | orchestrator | Wednesday 28 January 2026 01:04:57 +0000 (0:00:02.968) 0:02:10.014 ***** 2026-01-28 01:07:11.813999 | 
orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.814010 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.814070 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.814082 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.814093 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.814104 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.814115 | orchestrator | 2026-01-28 01:07:11.814126 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-01-28 01:07:11.814145 | orchestrator | Wednesday 28 January 2026 01:04:59 +0000 (0:00:02.244) 0:02:12.259 ***** 2026-01-28 01:07:11.814156 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.814166 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.814177 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.814188 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.814199 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.814209 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.814220 | orchestrator | 2026-01-28 01:07:11.814231 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-01-28 01:07:11.814242 | orchestrator | Wednesday 28 January 2026 01:05:01 +0000 (0:00:01.764) 0:02:14.023 ***** 2026-01-28 01:07:11.814253 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-28 01:07:11.814273 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.814284 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-28 01:07:11.814295 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.814306 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-28 01:07:11.814317 | orchestrator 
| skipping: [testbed-node-2] 2026-01-28 01:07:11.814328 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-28 01:07:11.814339 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.814350 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-28 01:07:11.814361 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.814372 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-01-28 01:07:11.814383 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.814394 | orchestrator | 2026-01-28 01:07:11.814405 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-01-28 01:07:11.814416 | orchestrator | Wednesday 28 January 2026 01:05:03 +0000 (0:00:01.911) 0:02:15.935 ***** 2026-01-28 01:07:11.814427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.814439 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.814450 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.814467 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.814486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-01-28 01:07:11.814505 | orchestrator | skipping: 
[testbed-node-0] 2026-01-28 01:07:11.814517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.814528 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.814540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.814551 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.814562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-01-28 01:07:11.814574 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.814585 | orchestrator | 2026-01-28 01:07:11.814596 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-01-28 01:07:11.814606 | orchestrator | Wednesday 28 January 2026 01:05:05 +0000 (0:00:01.804) 0:02:17.739 ***** 2026-01-28 01:07:11.814623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.814642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.814661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.814673 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.814685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-01-28 01:07:11.814701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-01-28 01:07:11.814719 | orchestrator | 2026-01-28 01:07:11.814730 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-01-28 01:07:11.814741 | orchestrator | Wednesday 28 January 2026 01:05:09 +0000 (0:00:03.818) 0:02:21.557 ***** 2026-01-28 01:07:11.814780 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:07:11.814799 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:07:11.814821 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:07:11.814849 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:07:11.814867 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:07:11.814894 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:07:11.814911 | orchestrator | 2026-01-28 01:07:11.814930 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-01-28 01:07:11.814948 | orchestrator | Wednesday 28 January 2026 01:05:09 +0000 (0:00:00.497) 0:02:22.055 ***** 2026-01-28 01:07:11.814967 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:07:11.814986 | orchestrator | 2026-01-28 01:07:11.815004 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-01-28 01:07:11.815105 | orchestrator | Wednesday 28 January 2026 01:05:11 +0000 (0:00:02.131) 0:02:24.186 ***** 2026-01-28 01:07:11.815119 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:07:11.815164 | orchestrator | 2026-01-28 01:07:11.815175 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-01-28 01:07:11.815186 | orchestrator | Wednesday 28 January 2026 01:05:13 +0000 (0:00:01.929) 0:02:26.116 ***** 2026-01-28 01:07:11.815197 | orchestrator | changed: 
[testbed-node-0] 2026-01-28 01:07:11.815208 | orchestrator | 2026-01-28 01:07:11.815219 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-28 01:07:11.815230 | orchestrator | Wednesday 28 January 2026 01:05:51 +0000 (0:00:37.746) 0:03:03.863 ***** 2026-01-28 01:07:11.815266 | orchestrator | 2026-01-28 01:07:11.815278 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-28 01:07:11.815289 | orchestrator | Wednesday 28 January 2026 01:05:51 +0000 (0:00:00.205) 0:03:04.068 ***** 2026-01-28 01:07:11.815300 | orchestrator | 2026-01-28 01:07:11.815311 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-28 01:07:11.815322 | orchestrator | Wednesday 28 January 2026 01:05:52 +0000 (0:00:00.937) 0:03:05.006 ***** 2026-01-28 01:07:11.815333 | orchestrator | 2026-01-28 01:07:11.815344 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-28 01:07:11.815355 | orchestrator | Wednesday 28 January 2026 01:05:52 +0000 (0:00:00.238) 0:03:05.244 ***** 2026-01-28 01:07:11.815366 | orchestrator | 2026-01-28 01:07:11.815377 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-28 01:07:11.815388 | orchestrator | Wednesday 28 January 2026 01:05:53 +0000 (0:00:00.169) 0:03:05.414 ***** 2026-01-28 01:07:11.815399 | orchestrator | 2026-01-28 01:07:11.815409 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-01-28 01:07:11.815420 | orchestrator | Wednesday 28 January 2026 01:05:53 +0000 (0:00:00.127) 0:03:05.542 ***** 2026-01-28 01:07:11.815431 | orchestrator | 2026-01-28 01:07:11.815442 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-01-28 01:07:11.815453 | orchestrator | Wednesday 28 January 2026 01:05:53 +0000 
(0:00:00.259) 0:03:05.802 ***** 2026-01-28 01:07:11.815464 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:07:11.815475 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:07:11.815486 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:07:11.815497 | orchestrator | 2026-01-28 01:07:11.815507 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-01-28 01:07:11.815518 | orchestrator | Wednesday 28 January 2026 01:06:16 +0000 (0:00:23.154) 0:03:28.957 ***** 2026-01-28 01:07:11.815529 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:07:11.815540 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:07:11.815551 | orchestrator | changed: [testbed-node-5] 2026-01-28 01:07:11.815574 | orchestrator | 2026-01-28 01:07:11.815585 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:07:11.815597 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-28 01:07:11.815610 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-28 01:07:11.815621 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-01-28 01:07:11.815632 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-28 01:07:11.815643 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-28 01:07:11.815654 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-01-28 01:07:11.815665 | orchestrator | 2026-01-28 01:07:11.815675 | orchestrator | 2026-01-28 01:07:11.815687 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:07:11.815697 | orchestrator | Wednesday 28 January 2026 
01:07:08 +0000 (0:00:52.046) 0:04:21.003 ***** 2026-01-28 01:07:11.815721 | orchestrator | =============================================================================== 2026-01-28 01:07:11.815732 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 52.05s 2026-01-28 01:07:11.815743 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 37.75s 2026-01-28 01:07:11.815776 | orchestrator | neutron : Restart neutron-server container ----------------------------- 23.15s 2026-01-28 01:07:11.815797 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.34s 2026-01-28 01:07:11.815808 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.80s 2026-01-28 01:07:11.815819 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.72s 2026-01-28 01:07:11.815830 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.06s 2026-01-28 01:07:11.815841 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.87s 2026-01-28 01:07:11.815860 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.37s 2026-01-28 01:07:11.815871 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.07s 2026-01-28 01:07:11.815882 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.82s 2026-01-28 01:07:11.815892 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.79s 2026-01-28 01:07:11.815903 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.79s 2026-01-28 01:07:11.815914 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.51s 2026-01-28 01:07:11.815925 | orchestrator | neutron : Copying over bgp_dragent.ini 
---------------------------------- 3.51s 2026-01-28 01:07:11.815935 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.48s 2026-01-28 01:07:11.815946 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.46s 2026-01-28 01:07:11.815957 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.43s 2026-01-28 01:07:11.815968 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.36s 2026-01-28 01:07:11.816027 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.34s 2026-01-28 01:07:11.816047 | orchestrator | 2026-01-28 01:07:11 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:07:11.816065 | orchestrator | 2026-01-28 01:07:11 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:07:11.816095 | orchestrator | 2026-01-28 01:07:11 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:07:11.816166 | orchestrator | 2026-01-28 01:07:11 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:07:14.844657 | orchestrator | 2026-01-28 01:07:14 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:07:14.844740 | orchestrator | 2026-01-28 01:07:14 | INFO  | Task 4eec2e12-b453-4451-a64b-8d09268f6b88 is in state STARTED 2026-01-28 01:07:14.844826 | orchestrator | 2026-01-28 01:07:14 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:07:14.844838 | orchestrator | 2026-01-28 01:07:14 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:07:14.844849 | orchestrator | 2026-01-28 01:07:14 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:07:17.895281 | orchestrator | 2026-01-28 01:07:17 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:07:17.895982 
| orchestrator | [identical polling output repeated every ~3 s from 01:07:17 through 01:08:21: tasks 6444899f-b4d2-407f-a9f0-26e6990e3b6b, 4eec2e12-b453-4451-a64b-8d09268f6b88, 39c78134-c3cf-45d3-b851-9720d95499fb and 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 all reported "is in state STARTED", each round followed by "Wait 1 second(s) until the next check"] 2026-01-28 01:08:21.804585 | orchestrator | 2026-01-28 01:08:21 | INFO  | Task 
4eec2e12-b453-4451-a64b-8d09268f6b88 is in state SUCCESS 2026-01-28 01:08:21.807312 | orchestrator | 2026-01-28 01:08:21.807390 | orchestrator | 2026-01-28 01:08:21.807404 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:08:21.807416 | orchestrator | 2026-01-28 01:08:21.807426 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 01:08:21.807437 | orchestrator | Wednesday 28 January 2026 01:05:21 +0000 (0:00:00.251) 0:00:00.251 ***** 2026-01-28 01:08:21.807447 | orchestrator | ok: [testbed-manager] 2026-01-28 01:08:21.807458 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:08:21.807467 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:08:21.807477 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:08:21.807486 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:08:21.807496 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:08:21.807506 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:08:21.807606 | orchestrator | 2026-01-28 01:08:21.807625 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 01:08:21.807828 | orchestrator | Wednesday 28 January 2026 01:05:22 +0000 (0:00:00.702) 0:00:00.954 ***** 2026-01-28 01:08:21.807842 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-01-28 01:08:21.807852 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-01-28 01:08:21.807862 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-01-28 01:08:21.807874 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-01-28 01:08:21.807886 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-01-28 01:08:21.807897 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-01-28 01:08:21.807913 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-01-28 
01:08:21.807931 | orchestrator |
2026-01-28 01:08:21.807949 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-01-28 01:08:21.807967 | orchestrator |
2026-01-28 01:08:21.807987 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-28 01:08:21.808006 | orchestrator | Wednesday 28 January 2026 01:05:22 +0000 (0:00:00.630) 0:00:01.585 *****
2026-01-28 01:08:21.808046 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 01:08:21.808064 | orchestrator |
2026-01-28 01:08:21.808076 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-01-28 01:08:21.808087 | orchestrator | Wednesday 28 January 2026 01:05:24 +0000 (0:00:01.475) 0:00:03.060 *****
2026-01-28 01:08:21.808103 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-28 01:08:21.808120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808236 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.808268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.808298 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.808316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.808406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.808444 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-28 01:08:21.808457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.808468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.808485 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.808496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.808514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.808524 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.808534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.808548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.808559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.808570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.808587 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.808598 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.808616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.808626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.808641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.808653 | orchestrator |
2026-01-28 01:08:21.808670 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-01-28 01:08:21.808687 | orchestrator | Wednesday 28 January 2026 01:05:26 +0000 (0:00:02.558) 0:00:05.619 *****
2026-01-28 01:08:21.808725 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 01:08:21.808842 | orchestrator |
2026-01-28 01:08:21.808854 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-01-28 01:08:21.808876 | orchestrator | Wednesday 28 January 2026 01:05:28 +0000 (0:00:01.429) 0:00:07.049 *****
2026-01-28 01:08:21.808887 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-28 01:08:21.808907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808958 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808973 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.808984 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.809002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.809012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.809023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.809040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.809059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.809092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.809111 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.809140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.809161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.809180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.809198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.809218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.809229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.809245 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-28 01:08:21.809264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.809274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.809284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.809946 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.809979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.809990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.810000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.810060 | orchestrator |
2026-01-28 01:08:21.810074 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2026-01-28 01:08:21.810084 | orchestrator | Wednesday 28 January 2026 01:05:33 +0000 (0:00:04.657) 0:00:11.706 *****
2026-01-28 01:08:21.810095 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-28 01:08:21.810105 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.810149 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.810179 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-28 01:08:21.810200 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.810237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.810256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.810274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.810293 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.810310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.810337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.810359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.810390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.810406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.810417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.810427 | orchestrator | skipping: [testbed-manager] 
2026-01-28 01:08:21.810438 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:08:21.810448 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:08:21.810458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.810468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.810478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.810528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.810552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.810597 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:08:21.810644 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.810664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.810681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.810691 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:08:21.810701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.810785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 
01:08:21.810806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.810825 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:08:21.810903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.810929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.810946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.810962 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:08:21.810980 | orchestrator | 2026-01-28 01:08:21.810996 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-01-28 01:08:21.811013 | orchestrator | Wednesday 28 January 2026 01:05:34 +0000 (0:00:01.625) 0:00:13.332 ***** 2026-01-28 01:08:21.811031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.811049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.811060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.811079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.811098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.811108 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:08:21.811128 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-01-28 01:08:21.811139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.811150 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.811161 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-01-28 01:08:21.811176 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.811192 | orchestrator | skipping: [testbed-manager] 2026-01-28 01:08:21.811203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.811213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.811227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.811237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.811247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.811257 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:08:21.811267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.811277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.811310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.811328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.811345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-01-28 01:08:21.811362 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:08:21.811385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.811405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.811422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.811437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.811455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 
01:08:21.811472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.811482 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:08:21.811491 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:08:21.811502 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-01-28 01:08:21.811516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.811527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-01-28 01:08:21.811537 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:08:21.811546 | orchestrator | 2026-01-28 01:08:21.811556 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-01-28 01:08:21.811566 | orchestrator | Wednesday 28 January 2026 01:05:36 +0000 (0:00:02.209) 0:00:15.541 ***** 2026-01-28 01:08:21.811576 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-01-28 01:08:21.811592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-28 01:08:21.811608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-28 01:08:21.811618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-28 01:08:21.811628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-28 01:08:21.811643 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-28 01:08:21.811654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-28 01:08:21.811672 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-01-28 01:08:21.811697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-01-28 01:08:21.811857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-28 01:08:21.811876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 01:08:21.811887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 01:08:21.811897 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-28 01:08:21.811913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-28 01:08:21.811923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-28 01:08:21.811934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 01:08:21.811952 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-28 01:08:21.811962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 01:08:21.811979 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-01-28 01:08:21.811995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 01:08:21.812005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-28 01:08:21.812015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-01-28 01:08:21.812031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-28 01:08:21.812046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-28 01:08:21.812064 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 01:08:21.812089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-01-28 01:08:21.812108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 01:08:21.812133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 01:08:21.812154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-01-28 01:08:21.812184 | orchestrator | 2026-01-28 01:08:21.812194 | orchestrator | TASK [prometheus : 
Find custom prometheus alert rules files] ******************* 2026-01-28 01:08:21.812204 | orchestrator | Wednesday 28 January 2026 01:05:42 +0000 (0:00:05.836) 0:00:21.378 ***** 2026-01-28 01:08:21.812214 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-28 01:08:21.812224 | orchestrator | 2026-01-28 01:08:21.812233 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-01-28 01:08:21.812243 | orchestrator | Wednesday 28 January 2026 01:05:43 +0000 (0:00:01.206) 0:00:22.585 ***** 2026-01-28 01:08:21.812253 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331386, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3983414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812264 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331386, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3983414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812282 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331386, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3983414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.812293 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331386, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3983414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812303 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331386, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3983414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812316 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331424, 'dev': 143, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.402739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812333 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331386, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3983414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812343 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331424, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.402739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812353 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331424, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.402739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812368 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331424, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.402739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812378 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331372, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.397466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812388 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1331386, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3983414, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 
01:08:21.812402 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331372, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.397466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812428 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331424, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.402739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812445 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331372, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.397466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812462 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1331409, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4013093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812487 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1331409, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4013093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812507 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331372, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.397466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812519 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331424, 'dev': 143, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.402739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.812534 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1331424, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.402739, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812551 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1331372, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.397466, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.812562 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1331409, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4013093, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
2026-01-28 01:08:21.813 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, openstack.rules, cadvisor.rules, haproxy.rules, node.rules, ...)
2026-01-28 01:08:21.814 | orchestrator | skipping: [testbed-node-0 .. testbed-node-5] => (items: /operations/prometheus/alertmanager.rules, alertmanager.rec.rules, cadvisor.rules, ceph.rules, ceph.rec.rules, elasticsearch.rules, haproxy.rules, hardware.rules, node.rules, openstack.rules, prometheus-extra.rules, prometheus.rec.rules, redfish.rules)
2026-01-28 01:08:21.814 | orchestrator | [repeated per-item loop output trimmed: each item carried an identical stat dict (mode 0644, uid/gid 0, owner root); every rule file reported "changed" on testbed-manager and "skipping" on testbed-node-0 through testbed-node-5]
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814814 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331369, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3958697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814823 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1331418, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4021568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814835 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331400, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4000697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814844 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331393, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3996553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814852 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1331391, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3989992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.814860 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331363, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3948724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814873 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331363, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3948724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814888 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331400, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4000697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814897 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331363, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3948724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814909 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331369, 'dev': 143, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3958697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814918 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1331418, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4021568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814926 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331393, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3996553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814934 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331438, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4048235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814948 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:08:21.814957 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331400, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4000697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814970 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331393, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3996553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.814978 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331438, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4048235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-01-28 01:08:21.814986 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:08:21.815001 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331363, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3948724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815010 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1331381, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3979871, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.815018 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331438, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4048235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815026 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331400, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4000697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815039 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:08:21.815051 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331369, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3958697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815072 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331400, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4000697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815086 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331393, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3996553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815104 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331363, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3948724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815120 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331393, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3996553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815133 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 
1331438, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4048235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815148 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:08:21.815158 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331438, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4048235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815168 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:08:21.815177 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331400, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4000697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815193 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331423, 'dev': 
143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4024053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.815219 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331393, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3996553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815233 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331438, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4048235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-01-28 01:08:21.815242 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:08:21.815252 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331360, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1769559572.3944232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.815266 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1331444, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.405146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.815284 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1331418, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4021568, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.815295 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1331369, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3958697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.815309 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1331363, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3948724, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.815319 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1331400, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4000697, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.815333 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1331393, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3996553, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.815342 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1331438, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.4048235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-01-28 01:08:21.815356 | orchestrator | 2026-01-28 01:08:21.815366 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-01-28 01:08:21.815376 | orchestrator | Wednesday 28 January 2026 01:06:11 +0000 (0:00:27.705) 0:00:50.291 ***** 2026-01-28 01:08:21.815385 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-28 01:08:21.815394 | orchestrator | 2026-01-28 01:08:21.815408 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-01-28 01:08:21.815422 | orchestrator | Wednesday 28 January 2026 01:06:12 +0000 (0:00:00.731) 0:00:51.022 ***** 2026-01-28 01:08:21.815436 | orchestrator | [WARNING]: Skipped 2026-01-28 01:08:21.815450 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-28 01:08:21.815465 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-01-28 01:08:21.815480 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-28 01:08:21.815494 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-01-28 01:08:21.815504 | orchestrator | ok: [testbed-manager -> localhost] 2026-01-28 01:08:21.815514 | orchestrator | [WARNING]: Skipped 2026-01-28 01:08:21.815522 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-01-28 01:08:21.815530 | orchestrator | node-0/prometheus.yml.d' path 
due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-01-28 01:08:21.815555 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-01-28 01:08:21.815563 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-01-28 01:08:21.815602 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-01-28 01:08:21.815641 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-01-28 01:08:21.815687 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-01-28 01:08:21.815748 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-01-28 01:08:21.815812 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-01-28 01:08:21.815835 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-01-28 01:08:21.815849 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-01-28 01:08:21.815862 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-01-28 01:08:21.815876 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-01-28 01:08:21.815889 | orchestrator |
2026-01-28 01:08:21.815902 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-01-28 01:08:21.815916 | orchestrator | Wednesday 28 January 2026 01:06:14 +0000 (0:00:01.915) 0:00:52.938 *****
2026-01-28 01:08:21.815924 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-28 01:08:21.815932 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:08:21.815945 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-28 01:08:21.815953 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:08:21.815962 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-28 01:08:21.815969 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:08:21.815977 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-28 01:08:21.815985 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:08:21.815993 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-28 01:08:21.816001 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:08:21.816009 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-28 01:08:21.816017 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:08:21.816025 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-01-28 01:08:21.816033 | orchestrator |
2026-01-28 01:08:21.816041 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-01-28 01:08:21.816049 | orchestrator | Wednesday 28 January 2026 01:06:31 +0000 (0:00:17.600) 0:01:10.538 *****
2026-01-28 01:08:21.816057 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-28 01:08:21.816065 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:08:21.816073 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-28 01:08:21.816081 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:08:21.816089 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-28 01:08:21.816097 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:08:21.816106 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-28 01:08:21.816120 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:08:21.816133 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-28 01:08:21.816146 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:08:21.816160 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-28 01:08:21.816174 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:08:21.816188 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-01-28 01:08:21.816201 | orchestrator |
2026-01-28 01:08:21.816213 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-01-28 01:08:21.816221 | orchestrator | Wednesday 28 January 2026 01:06:34 +0000 (0:00:02.772) 0:01:13.310 *****
2026-01-28 01:08:21.816230 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-28 01:08:21.816239 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-28 01:08:21.816247 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:08:21.816262 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-28 01:08:21.816270 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:08:21.816278 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:08:21.816292 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-28 01:08:21.816300 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-28 01:08:21.816308 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:08:21.816316 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-28 01:08:21.816324 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:08:21.816332 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-01-28 01:08:21.816340 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:08:21.816348 | orchestrator |
2026-01-28 01:08:21.816356 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-01-28 01:08:21.816363 | orchestrator | Wednesday 28 January 2026 01:06:37 +0000 (0:00:02.461) 0:01:15.772 *****
2026-01-28 01:08:21.816372 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-28 01:08:21.816379 | orchestrator |
2026-01-28 01:08:21.816387 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-01-28 01:08:21.816395 | orchestrator | Wednesday 28 January 2026 01:06:37 +0000 (0:00:00.846) 0:01:16.618 *****
2026-01-28 01:08:21.816403 | orchestrator | skipping: [testbed-manager]
2026-01-28 01:08:21.816411 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:08:21.816419 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:08:21.816427 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:08:21.816435 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:08:21.816443 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:08:21.816451 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:08:21.816463 | orchestrator |
2026-01-28 01:08:21.816477 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-01-28 01:08:21.816496 | orchestrator | Wednesday 28 January 2026 01:06:38 +0000 (0:00:00.734) 0:01:17.353 *****
2026-01-28 01:08:21.816511 | orchestrator | skipping: [testbed-manager]
2026-01-28 01:08:21.816525 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:08:21.816539 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:08:21.816553 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:08:21.816567 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:08:21.816585 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:08:21.816603 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:08:21.816616 | orchestrator |
2026-01-28 01:08:21.816630 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-01-28 01:08:21.816642 | orchestrator | Wednesday 28 January 2026 01:06:41 +0000 (0:00:02.306) 0:01:19.659 *****
2026-01-28 01:08:21.816654 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-28 01:08:21.816668 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-28 01:08:21.816680 | orchestrator | skipping: [testbed-manager]
2026-01-28 01:08:21.816693 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-28 01:08:21.816858 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:08:21.816887 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:08:21.816895 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-28 01:08:21.816903 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:08:21.816911 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-28 01:08:21.816930 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:08:21.816937 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-28 01:08:21.816945 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:08:21.816953 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-01-28 01:08:21.816961 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:08:21.816969 | orchestrator |
2026-01-28 01:08:21.816977 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-01-28 01:08:21.816986 | orchestrator | Wednesday 28 January 2026 01:06:42 +0000 (0:00:01.689) 0:01:21.349 *****
2026-01-28 01:08:21.816994 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-28 01:08:21.817002 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:08:21.817010 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-28 01:08:21.817017 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:08:21.817025 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-28 01:08:21.817034 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:08:21.817041 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-28 01:08:21.817049 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:08:21.817057 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-28 01:08:21.817065 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:08:21.817073 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-28 01:08:21.817081 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-01-28 01:08:21.817089 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:08:21.817097 | orchestrator |
2026-01-28 01:08:21.817105 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-01-28 01:08:21.817124 | orchestrator | Wednesday 28 January 2026 01:06:44 +0000 (0:00:01.425) 0:01:22.775 *****
2026-01-28 01:08:21.817133 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2026-01-28 01:08:21.817173 | orchestrator | ok: [testbed-manager -> localhost]
2026-01-28 01:08:21.817181 | orchestrator |
2026-01-28 01:08:21.817189 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-01-28 01:08:21.817197 | orchestrator | Wednesday 28 January 2026 01:06:45 +0000 (0:00:01.179) 0:01:23.954 *****
2026-01-28 01:08:21.817205 | orchestrator | skipping: [testbed-manager]
2026-01-28 01:08:21.817213 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:08:21.817220 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:08:21.817228 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:08:21.817235 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:08:21.817242 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:08:21.817248 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:08:21.817255 | orchestrator |
2026-01-28 01:08:21.817262 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-01-28 01:08:21.817271 | orchestrator | Wednesday 28 January 2026 01:06:46 +0000 (0:00:01.000) 0:01:24.955 *****
2026-01-28 01:08:21.817283 | orchestrator | skipping: [testbed-manager]
2026-01-28 01:08:21.817292 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:08:21.817302 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:08:21.817313 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:08:21.817320 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:08:21.817326 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:08:21.817333 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:08:21.817340 | orchestrator |
2026-01-28 01:08:21.817347 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-01-28 01:08:21.817359 | orchestrator | Wednesday 28 January 2026 01:06:47 +0000 (0:00:00.940) 0:01:25.896 *****
2026-01-28 01:08:21.817368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.817376 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-01-28 01:08:21.817384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.817391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.817404 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.817412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.817426 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.817437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.817452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-01-28 01:08:21.817459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.817466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.817473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.817486 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.817493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.817507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.817515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.817522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.817529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.817536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.817549 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-01-28 01:08:21.817569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.817580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.817587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.817595 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.817602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.817609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-01-28 01:08:21.817621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.817632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.817639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-01-28 01:08:21.817646 | orchestrator |
2026-01-28 01:08:21.817654 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-01-28 01:08:21.817660 | orchestrator | Wednesday 28 January 2026 01:06:51 +0000 (0:00:03.991) 0:01:29.888 *****
2026-01-28 01:08:21.817671 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-01-28 01:08:21.817678 | orchestrator | skipping: [testbed-manager]
2026-01-28 01:08:21.817686 | orchestrator |
2026-01-28 01:08:21.817692 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-28 01:08:21.817699 | orchestrator | Wednesday 28 January 2026 01:06:52 +0000 (0:00:01.040) 0:01:30.928 *****
2026-01-28 01:08:21.817731 | orchestrator |
2026-01-28 01:08:21.817743 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-28 01:08:21.817750 | orchestrator | Wednesday 28 January 2026 01:06:52 +0000 (0:00:00.064) 0:01:30.993 *****
2026-01-28 01:08:21.817757 | orchestrator |
2026-01-28 01:08:21.817763 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-28 01:08:21.817770 | orchestrator | Wednesday 28 January 2026 01:06:52 +0000 (0:00:00.062) 0:01:31.055 *****
2026-01-28 01:08:21.817777 | orchestrator |
2026-01-28 01:08:21.817783 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-28 01:08:21.817790 | orchestrator | Wednesday 28 January 2026 01:06:52 +0000 (0:00:00.062) 0:01:31.118 *****
2026-01-28 01:08:21.817797 | orchestrator |
2026-01-28 01:08:21.817803 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-28 01:08:21.817810 | orchestrator | Wednesday 28 January 2026 01:06:52 +0000 (0:00:00.186) 0:01:31.304 *****
2026-01-28 01:08:21.817817 | orchestrator |
2026-01-28 01:08:21.817824 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-28 01:08:21.817830 | orchestrator | Wednesday 28 January 2026 01:06:52 +0000 (0:00:00.059) 0:01:31.364 *****
2026-01-28 01:08:21.817837 | orchestrator |
2026-01-28 01:08:21.817843 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-01-28 01:08:21.817850 | orchestrator | Wednesday 28 January 2026 01:06:52 +0000 (0:00:00.112) 0:01:31.476 *****
2026-01-28 01:08:21.817857 | orchestrator |
2026-01-28 01:08:21.817864 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-01-28 01:08:21.817870 | orchestrator | Wednesday 28 January 2026 01:06:52 +0000 (0:00:00.081) 0:01:31.558 *****
2026-01-28 01:08:21.817877 | orchestrator | changed: [testbed-manager]
2026-01-28 01:08:21.817883 | orchestrator |
2026-01-28 01:08:21.817890 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-01-28 01:08:21.817897 | orchestrator | Wednesday 28 January 2026 01:07:08 +0000 (0:00:15.384) 0:01:46.942 *****
2026-01-28 01:08:21.817904 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:08:21.817910 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:08:21.817917 | orchestrator | changed: [testbed-node-4]
2026-01-28 01:08:21.817924 | orchestrator | changed: [testbed-node-5]
2026-01-28 01:08:21.817936 | orchestrator | changed: [testbed-node-3]
2026-01-28 01:08:21.817943 | orchestrator | changed: [testbed-manager]
2026-01-28 01:08:21.817949 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:08:21.817956 | orchestrator |
2026-01-28 01:08:21.817962 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-01-28 01:08:21.817969 | orchestrator | Wednesday 28 January 2026 01:07:22 +0000 (0:00:14.015) 0:02:00.958 *****
2026-01-28 01:08:21.817976 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:08:21.817983 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:08:21.817990 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:08:21.817997 | orchestrator |
2026-01-28 01:08:21.818003 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-01-28 01:08:21.818010 | orchestrator | Wednesday 28 January 2026 01:07:33 +0000 (0:00:10.946) 0:02:11.905 *****
2026-01-28 01:08:21.818046 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:08:21.818053 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:08:21.818060 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:08:21.818067 | orchestrator |
2026-01-28 01:08:21.818074 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-01-28 01:08:21.818080 | orchestrator | Wednesday 28 January 2026 01:07:39 +0000 (0:00:05.864) 0:02:17.769 *****
2026-01-28 01:08:21.818088 | orchestrator | changed: [testbed-manager]
2026-01-28 01:08:21.818095 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:08:21.818101 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:08:21.818108 | orchestrator | changed: [testbed-node-5]
2026-01-28 01:08:21.818115 | orchestrator | changed: [testbed-node-4]
2026-01-28 01:08:21.818128 | orchestrator | changed: [testbed-node-3]
2026-01-28 01:08:21.818135 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:08:21.818142 | orchestrator |
2026-01-28 01:08:21.818149 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-01-28 01:08:21.818155 | orchestrator | Wednesday 28 January 2026 01:07:49 +0000 (0:00:09.991) 0:02:27.760 *****
2026-01-28 01:08:21.818162 | orchestrator | changed: [testbed-manager]
2026-01-28 01:08:21.818169 | orchestrator |
2026-01-28 01:08:21.818176 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-01-28 01:08:21.818183 | orchestrator | Wednesday 28 January 2026 01:07:55 +0000 (0:00:06.286) 0:02:34.047 *****
2026-01-28 01:08:21.818190 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:08:21.818197 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:08:21.818204 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:08:21.818210 | orchestrator |
2026-01-28 01:08:21.818217 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-01-28 01:08:21.818224 | orchestrator | Wednesday 28 January 2026 01:08:05 +0000 (0:00:10.278) 0:02:44.325 *****
2026-01-28 01:08:21.818231 | orchestrator | changed: [testbed-manager]
2026-01-28 01:08:21.818237 | orchestrator |
2026-01-28 01:08:21.818244 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-01-28 01:08:21.818251 | orchestrator | Wednesday 28 January 2026 01:08:10 +0000 (0:00:04.817) 0:02:49.142 *****
2026-01-28 01:08:21.818258 | orchestrator | changed: [testbed-node-3]
2026-01-28 01:08:21.818265 | orchestrator | changed: [testbed-node-5]
2026-01-28 01:08:21.818271 | orchestrator | changed: [testbed-node-4]
2026-01-28 01:08:21.818278 | orchestrator |
2026-01-28 01:08:21.818285 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 01:08:21.818292 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-01-28 01:08:21.818304 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-28 01:08:21.818311 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-28 01:08:21.818323 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-28 01:08:21.818330 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-28 01:08:21.818337 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-28 01:08:21.818344 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-28
01:08:21.818351 | orchestrator | 2026-01-28 01:08:21.818357 | orchestrator | 2026-01-28 01:08:21.818364 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:08:21.818371 | orchestrator | Wednesday 28 January 2026 01:08:21 +0000 (0:00:10.584) 0:02:59.727 ***** 2026-01-28 01:08:21.818378 | orchestrator | =============================================================================== 2026-01-28 01:08:21.818385 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.71s 2026-01-28 01:08:21.818392 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.60s 2026-01-28 01:08:21.818398 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 15.38s 2026-01-28 01:08:21.818405 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.02s 2026-01-28 01:08:21.818412 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.95s 2026-01-28 01:08:21.818419 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.58s 2026-01-28 01:08:21.818425 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.28s 2026-01-28 01:08:21.818432 | orchestrator | prometheus : Restart prometheus-cadvisor container ---------------------- 9.99s 2026-01-28 01:08:21.818439 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.29s 2026-01-28 01:08:21.818445 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.86s 2026-01-28 01:08:21.818452 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.84s 2026-01-28 01:08:21.818459 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.82s 2026-01-28 01:08:21.818465 | orchestrator | service-cert-copy : 
prometheus | Copying over extra CA certificates ----- 4.66s 2026-01-28 01:08:21.818472 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.99s 2026-01-28 01:08:21.818479 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.77s 2026-01-28 01:08:21.818485 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.56s 2026-01-28 01:08:21.818492 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.46s 2026-01-28 01:08:21.818499 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.31s 2026-01-28 01:08:21.818506 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.21s 2026-01-28 01:08:21.818512 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.92s 2026-01-28 01:08:21.818523 | orchestrator | 2026-01-28 01:08:21 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:21.818530 | orchestrator | 2026-01-28 01:08:21 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:21.818537 | orchestrator | 2026-01-28 01:08:21 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:24.873917 | orchestrator | 2026-01-28 01:08:24 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:24.878982 | orchestrator | 2026-01-28 01:08:24 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:24.880376 | orchestrator | 2026-01-28 01:08:24 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:24.882598 | orchestrator | 2026-01-28 01:08:24 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:24.882911 | orchestrator | 2026-01-28 01:08:24 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:27.937187 | 
orchestrator | 2026-01-28 01:08:27 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:27.939743 | orchestrator | 2026-01-28 01:08:27 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:27.942099 | orchestrator | 2026-01-28 01:08:27 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:27.944255 | orchestrator | 2026-01-28 01:08:27 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:27.944312 | orchestrator | 2026-01-28 01:08:27 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:30.992475 | orchestrator | 2026-01-28 01:08:30 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:30.994618 | orchestrator | 2026-01-28 01:08:30 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:30.997248 | orchestrator | 2026-01-28 01:08:30 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:30.998677 | orchestrator | 2026-01-28 01:08:30 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:30.999042 | orchestrator | 2026-01-28 01:08:30 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:34.044473 | orchestrator | 2026-01-28 01:08:34 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:34.044810 | orchestrator | 2026-01-28 01:08:34 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:34.045432 | orchestrator | 2026-01-28 01:08:34 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:34.046413 | orchestrator | 2026-01-28 01:08:34 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:34.046439 | orchestrator | 2026-01-28 01:08:34 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:37.095447 | orchestrator | 2026-01-28 
01:08:37 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:37.095546 | orchestrator | 2026-01-28 01:08:37 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:37.097258 | orchestrator | 2026-01-28 01:08:37 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:37.097332 | orchestrator | 2026-01-28 01:08:37 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:37.097344 | orchestrator | 2026-01-28 01:08:37 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:40.229572 | orchestrator | 2026-01-28 01:08:40 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:40.232553 | orchestrator | 2026-01-28 01:08:40 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:40.234587 | orchestrator | 2026-01-28 01:08:40 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:40.236647 | orchestrator | 2026-01-28 01:08:40 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:40.236684 | orchestrator | 2026-01-28 01:08:40 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:43.290455 | orchestrator | 2026-01-28 01:08:43 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:43.293048 | orchestrator | 2026-01-28 01:08:43 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:43.295106 | orchestrator | 2026-01-28 01:08:43 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:43.296674 | orchestrator | 2026-01-28 01:08:43 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:43.296755 | orchestrator | 2026-01-28 01:08:43 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:46.326509 | orchestrator | 2026-01-28 01:08:46 | INFO  | Task 
6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:46.326958 | orchestrator | 2026-01-28 01:08:46 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:46.327712 | orchestrator | 2026-01-28 01:08:46 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:46.328498 | orchestrator | 2026-01-28 01:08:46 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:46.328521 | orchestrator | 2026-01-28 01:08:46 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:49.373422 | orchestrator | 2026-01-28 01:08:49 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:49.375482 | orchestrator | 2026-01-28 01:08:49 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:49.377092 | orchestrator | 2026-01-28 01:08:49 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:49.378912 | orchestrator | 2026-01-28 01:08:49 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:49.379042 | orchestrator | 2026-01-28 01:08:49 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:52.410084 | orchestrator | 2026-01-28 01:08:52 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:52.411420 | orchestrator | 2026-01-28 01:08:52 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:52.412243 | orchestrator | 2026-01-28 01:08:52 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:52.413306 | orchestrator | 2026-01-28 01:08:52 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:52.413332 | orchestrator | 2026-01-28 01:08:52 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:55.450617 | orchestrator | 2026-01-28 01:08:55 | INFO  | Task 
6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:55.451111 | orchestrator | 2026-01-28 01:08:55 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:55.452406 | orchestrator | 2026-01-28 01:08:55 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:55.464270 | orchestrator | 2026-01-28 01:08:55 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:55.464343 | orchestrator | 2026-01-28 01:08:55 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:08:58.488847 | orchestrator | 2026-01-28 01:08:58 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:08:58.489902 | orchestrator | 2026-01-28 01:08:58 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:08:58.492003 | orchestrator | 2026-01-28 01:08:58 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:08:58.494222 | orchestrator | 2026-01-28 01:08:58 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:08:58.494418 | orchestrator | 2026-01-28 01:08:58 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:09:01.534491 | orchestrator | 2026-01-28 01:09:01 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:09:01.536522 | orchestrator | 2026-01-28 01:09:01 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:09:01.539851 | orchestrator | 2026-01-28 01:09:01 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:09:01.542110 | orchestrator | 2026-01-28 01:09:01 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:09:01.542225 | orchestrator | 2026-01-28 01:09:01 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:09:04.604470 | orchestrator | 2026-01-28 01:09:04 | INFO  | Task 
6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:09:04.606984 | orchestrator | 2026-01-28 01:09:04 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:09:04.608574 | orchestrator | 2026-01-28 01:09:04 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state STARTED 2026-01-28 01:09:04.610781 | orchestrator | 2026-01-28 01:09:04 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:09:04.610831 | orchestrator | 2026-01-28 01:09:04 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:09:07.645704 | orchestrator | 2026-01-28 01:09:07 | INFO  | Task b92f6816-a5e6-4a72-8b42-9dfc7587c1e4 is in state STARTED 2026-01-28 01:09:07.648760 | orchestrator | 2026-01-28 01:09:07 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:09:07.649627 | orchestrator | 2026-01-28 01:09:07 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED 2026-01-28 01:09:07.650999 | orchestrator | 2026-01-28 01:09:07 | INFO  | Task 39c78134-c3cf-45d3-b851-9720d95499fb is in state SUCCESS 2026-01-28 01:09:07.652721 | orchestrator | 2026-01-28 01:09:07.652774 | orchestrator | 2026-01-28 01:09:07.652795 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-01-28 01:09:07.652815 | orchestrator | 2026-01-28 01:09:07.652835 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-01-28 01:09:07.652854 | orchestrator | Wednesday 28 January 2026 01:06:22 +0000 (0:00:00.504) 0:00:00.504 ***** 2026-01-28 01:09:07.652874 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:09:07.652895 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:09:07.652914 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:09:07.652933 | orchestrator | 2026-01-28 01:09:07.652952 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 
01:09:07.652972 | orchestrator | Wednesday 28 January 2026 01:06:22 +0000 (0:00:00.515) 0:00:01.019 ***** 2026-01-28 01:09:07.652991 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-01-28 01:09:07.653011 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-01-28 01:09:07.653051 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-01-28 01:09:07.653070 | orchestrator | 2026-01-28 01:09:07.653089 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-01-28 01:09:07.653105 | orchestrator | 2026-01-28 01:09:07.653121 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-28 01:09:07.653138 | orchestrator | Wednesday 28 January 2026 01:06:23 +0000 (0:00:00.591) 0:00:01.611 ***** 2026-01-28 01:09:07.653157 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:09:07.653176 | orchestrator | 2026-01-28 01:09:07.653195 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-01-28 01:09:07.653212 | orchestrator | Wednesday 28 January 2026 01:06:24 +0000 (0:00:00.839) 0:00:02.450 ***** 2026-01-28 01:09:07.653228 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-01-28 01:09:07.653279 | orchestrator | 2026-01-28 01:09:07.653299 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-01-28 01:09:07.653317 | orchestrator | Wednesday 28 January 2026 01:06:27 +0000 (0:00:03.226) 0:00:05.677 ***** 2026-01-28 01:09:07.653338 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-01-28 01:09:07.653357 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-01-28 01:09:07.653375 | orchestrator | 2026-01-28 
01:09:07.653395 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-01-28 01:09:07.653413 | orchestrator | Wednesday 28 January 2026 01:06:33 +0000 (0:00:06.601) 0:00:12.278 ***** 2026-01-28 01:09:07.653430 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-01-28 01:09:07.653450 | orchestrator | 2026-01-28 01:09:07.653470 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-01-28 01:09:07.653490 | orchestrator | Wednesday 28 January 2026 01:06:37 +0000 (0:00:03.330) 0:00:15.608 ***** 2026-01-28 01:09:07.653509 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-01-28 01:09:07.653528 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-01-28 01:09:07.653547 | orchestrator | 2026-01-28 01:09:07.653566 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-01-28 01:09:07.653584 | orchestrator | Wednesday 28 January 2026 01:06:41 +0000 (0:00:03.801) 0:00:19.410 ***** 2026-01-28 01:09:07.653602 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-01-28 01:09:07.653621 | orchestrator | 2026-01-28 01:09:07.653640 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-01-28 01:09:07.653658 | orchestrator | Wednesday 28 January 2026 01:06:44 +0000 (0:00:03.435) 0:00:22.846 ***** 2026-01-28 01:09:07.653707 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-01-28 01:09:07.653725 | orchestrator | 2026-01-28 01:09:07.653744 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-01-28 01:09:07.653761 | orchestrator | Wednesday 28 January 2026 01:06:48 +0000 (0:00:03.821) 0:00:26.667 ***** 2026-01-28 01:09:07.653816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.653857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.653899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.653919 | orchestrator | 2026-01-28 01:09:07.653939 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-28 01:09:07.653958 | orchestrator | Wednesday 28 January 2026 01:06:51 +0000 (0:00:03.679) 0:00:30.346 ***** 2026-01-28 01:09:07.653974 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:09:07.653991 | orchestrator | 2026-01-28 01:09:07.654091 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-01-28 01:09:07.654118 | orchestrator | Wednesday 28 January 2026 01:06:52 +0000 (0:00:00.592) 0:00:30.939 ***** 2026-01-28 01:09:07.654138 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:09:07.654156 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:09:07.654188 | orchestrator | changed: [testbed-node-1] 2026-01-28 
01:09:07.654207 | orchestrator | 2026-01-28 01:09:07.654225 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-01-28 01:09:07.654243 | orchestrator | Wednesday 28 January 2026 01:06:56 +0000 (0:00:03.776) 0:00:34.715 ***** 2026-01-28 01:09:07.654264 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-28 01:09:07.654283 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-28 01:09:07.654312 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-28 01:09:07.654332 | orchestrator | 2026-01-28 01:09:07.654352 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-01-28 01:09:07.654372 | orchestrator | Wednesday 28 January 2026 01:06:57 +0000 (0:00:01.436) 0:00:36.152 ***** 2026-01-28 01:09:07.654391 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-28 01:09:07.654409 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-28 01:09:07.654427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-01-28 01:09:07.654446 | orchestrator | 2026-01-28 01:09:07.654465 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-01-28 01:09:07.654484 | orchestrator | Wednesday 28 January 2026 01:06:58 +0000 (0:00:01.042) 0:00:37.195 ***** 2026-01-28 01:09:07.654503 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:09:07.654524 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:09:07.654543 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:09:07.654560 | orchestrator | 2026-01-28 01:09:07.654578 | 
orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-01-28 01:09:07.654596 | orchestrator | Wednesday 28 January 2026 01:06:59 +0000 (0:00:00.582) 0:00:37.777 ***** 2026-01-28 01:09:07.654615 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.654633 | orchestrator | 2026-01-28 01:09:07.654651 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-01-28 01:09:07.654737 | orchestrator | Wednesday 28 January 2026 01:06:59 +0000 (0:00:00.229) 0:00:38.007 ***** 2026-01-28 01:09:07.654757 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.654776 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:07.654793 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.654812 | orchestrator | 2026-01-28 01:09:07.654828 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-28 01:09:07.654847 | orchestrator | Wednesday 28 January 2026 01:06:59 +0000 (0:00:00.249) 0:00:38.257 ***** 2026-01-28 01:09:07.654866 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:09:07.654883 | orchestrator | 2026-01-28 01:09:07.654902 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-01-28 01:09:07.654919 | orchestrator | Wednesday 28 January 2026 01:07:00 +0000 (0:00:00.477) 0:00:38.734 ***** 2026-01-28 01:09:07.654962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.655014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.655038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.655069 | orchestrator | 2026-01-28 01:09:07.655088 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-01-28 01:09:07.655108 | orchestrator | Wednesday 28 January 2026 01:07:04 +0000 (0:00:04.052) 0:00:42.787 ***** 2026-01-28 01:09:07.655148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-28 01:09:07.655171 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.655189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-28 01:09:07.655208 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.655249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-28 01:09:07.655270 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:07.655288 | orchestrator | 2026-01-28 01:09:07.655307 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-01-28 01:09:07.655332 | orchestrator | Wednesday 28 January 2026 01:07:07 +0000 (0:00:03.326) 0:00:46.113 ***** 2026-01-28 01:09:07.655355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-28 01:09:07.655374 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.655403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-28 01:09:07.655433 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:07.655459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-01-28 01:09:07.655479 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.655497 | orchestrator | 2026-01-28 01:09:07.655516 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-01-28 01:09:07.655535 | orchestrator | Wednesday 28 January 2026 01:07:14 +0000 (0:00:06.858) 0:00:52.971 ***** 2026-01-28 01:09:07.655554 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:07.655571 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.655590 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.655608 | orchestrator | 2026-01-28 01:09:07.655626 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-01-28 01:09:07.655645 | orchestrator | Wednesday 28 January 2026 01:07:18 +0000 (0:00:03.832) 0:00:56.804 ***** 2026-01-28 01:09:07.655761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.655805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.655821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.655840 | orchestrator | 2026-01-28 01:09:07.655852 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-01-28 01:09:07.655863 | orchestrator | Wednesday 28 January 2026 01:07:22 +0000 (0:00:03.930) 0:01:00.735 ***** 2026-01-28 01:09:07.655873 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:09:07.655885 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:09:07.655895 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:09:07.655906 | orchestrator | 2026-01-28 01:09:07.655917 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-01-28 01:09:07.655928 | orchestrator | Wednesday 28 January 2026 01:07:29 +0000 (0:00:06.756) 0:01:07.491 ***** 2026-01-28 01:09:07.655939 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.655950 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.655961 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:07.655971 | orchestrator | 2026-01-28 01:09:07.655983 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-01-28 01:09:07.655994 | orchestrator | Wednesday 28 January 2026 01:07:32 +0000 (0:00:03.822) 0:01:11.313 ***** 2026-01-28 01:09:07.656005 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.656023 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:07.656034 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.656045 | orchestrator | 2026-01-28 01:09:07.656056 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-01-28 01:09:07.656067 | orchestrator | Wednesday 28 January 2026 01:07:37 +0000 (0:00:04.414) 0:01:15.727 ***** 2026-01-28 01:09:07.656077 | orchestrator | 
skipping: [testbed-node-1] 2026-01-28 01:09:07.656086 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.656096 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.656106 | orchestrator | 2026-01-28 01:09:07.656116 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-01-28 01:09:07.656125 | orchestrator | Wednesday 28 January 2026 01:07:42 +0000 (0:00:04.900) 0:01:20.628 ***** 2026-01-28 01:09:07.656135 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.656145 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:07.656155 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.656164 | orchestrator | 2026-01-28 01:09:07.656178 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-01-28 01:09:07.656189 | orchestrator | Wednesday 28 January 2026 01:07:47 +0000 (0:00:05.167) 0:01:25.795 ***** 2026-01-28 01:09:07.656199 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.656208 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:07.656218 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.656228 | orchestrator | 2026-01-28 01:09:07.656237 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-01-28 01:09:07.656247 | orchestrator | Wednesday 28 January 2026 01:07:47 +0000 (0:00:00.252) 0:01:26.047 ***** 2026-01-28 01:09:07.656257 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-28 01:09:07.656273 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.656283 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-28 01:09:07.656293 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:07.656303 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-01-28 01:09:07.656313 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.656323 | orchestrator | 2026-01-28 01:09:07.656333 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-01-28 01:09:07.656343 | orchestrator | Wednesday 28 January 2026 01:07:51 +0000 (0:00:03.327) 0:01:29.375 ***** 2026-01-28 01:09:07.656352 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:09:07.656362 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:09:07.656372 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:09:07.656382 | orchestrator | 2026-01-28 01:09:07.656391 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-01-28 01:09:07.656401 | orchestrator | Wednesday 28 January 2026 01:07:56 +0000 (0:00:05.934) 0:01:35.309 ***** 2026-01-28 01:09:07.656412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.656437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.656455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-01-28 01:09:07.656466 | orchestrator | 2026-01-28 01:09:07.656476 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-01-28 01:09:07.656486 | orchestrator | Wednesday 28 January 2026 01:08:01 +0000 (0:00:04.221) 0:01:39.531 ***** 2026-01-28 01:09:07.656496 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:07.656506 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:07.656515 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:07.656525 | orchestrator | 2026-01-28 01:09:07.656535 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-01-28 01:09:07.656545 | orchestrator | Wednesday 28 January 2026 01:08:01 +0000 (0:00:00.231) 0:01:39.762 ***** 2026-01-28 01:09:07.656555 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:09:07.656565 | orchestrator | 2026-01-28 01:09:07.656574 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-01-28 01:09:07.656584 | orchestrator | Wednesday 28 January 2026 01:08:03 +0000 (0:00:01.917) 0:01:41.679 ***** 2026-01-28 01:09:07.656594 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:09:07.656604 | orchestrator | 2026-01-28 01:09:07.656614 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-01-28 01:09:07.656623 | orchestrator | Wednesday 28 January 2026 01:08:05 +0000 (0:00:02.146) 0:01:43.826 ***** 2026-01-28 01:09:07.656633 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:09:07.656643 | orchestrator | 2026-01-28 01:09:07.656653 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-01-28 01:09:07.656688 | orchestrator | Wednesday 28 January 2026 01:08:07 +0000 (0:00:01.895) 0:01:45.722 ***** 2026-01-28 
01:09:07.656699 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:09:07.656709 | orchestrator |
2026-01-28 01:09:07.656719 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-01-28 01:09:07.656741 | orchestrator | Wednesday 28 January 2026 01:08:34 +0000 (0:00:26.691) 0:02:12.413 *****
2026-01-28 01:09:07.656751 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:09:07.656761 | orchestrator |
2026-01-28 01:09:07.656771 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-28 01:09:07.656780 | orchestrator | Wednesday 28 January 2026 01:08:36 +0000 (0:00:02.119) 0:02:14.532 *****
2026-01-28 01:09:07.656790 | orchestrator |
2026-01-28 01:09:07.656800 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-28 01:09:07.656810 | orchestrator | Wednesday 28 January 2026 01:08:36 +0000 (0:00:00.388) 0:02:14.921 *****
2026-01-28 01:09:07.656819 | orchestrator |
2026-01-28 01:09:07.656829 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-01-28 01:09:07.656839 | orchestrator | Wednesday 28 January 2026 01:08:36 +0000 (0:00:00.071) 0:02:14.992 *****
2026-01-28 01:09:07.656848 | orchestrator |
2026-01-28 01:09:07.656858 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-01-28 01:09:07.656872 | orchestrator | Wednesday 28 January 2026 01:08:36 +0000 (0:00:00.071) 0:02:15.064 *****
2026-01-28 01:09:07.656882 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:09:07.656892 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:09:07.656902 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:09:07.656911 | orchestrator |
2026-01-28 01:09:07.656921 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 01:09:07.656932 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-01-28 01:09:07.656942 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-28 01:09:07.656952 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-01-28 01:09:07.656962 | orchestrator |
2026-01-28 01:09:07.656971 | orchestrator |
2026-01-28 01:09:07.656981 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 01:09:07.656991 | orchestrator | Wednesday 28 January 2026 01:09:05 +0000 (0:00:28.509) 0:02:43.573 *****
2026-01-28 01:09:07.657000 | orchestrator | ===============================================================================
2026-01-28 01:09:07.657010 | orchestrator | glance : Restart glance-api container ---------------------------------- 28.51s
2026-01-28 01:09:07.657020 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.69s
2026-01-28 01:09:07.657046 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.86s
2026-01-28 01:09:07.657056 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.76s
2026-01-28 01:09:07.657066 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.60s
2026-01-28 01:09:07.657076 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.93s
2026-01-28 01:09:07.657086 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.17s
2026-01-28 01:09:07.657095 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.90s
2026-01-28 01:09:07.657105 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.41s
2026-01-28 01:09:07.657115 | orchestrator | glance : Check glance containers ---------------------------------------- 4.22s
2026-01-28 01:09:07.657125 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.05s
2026-01-28 01:09:07.657134 | orchestrator | glance : Copying over config.json files for services -------------------- 3.93s
2026-01-28 01:09:07.657144 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.83s
2026-01-28 01:09:07.657154 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.82s
2026-01-28 01:09:07.657164 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.82s
2026-01-28 01:09:07.657177 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.80s
2026-01-28 01:09:07.657187 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.78s
2026-01-28 01:09:07.657196 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.68s
2026-01-28 01:09:07.657206 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.44s
2026-01-28 01:09:07.657216 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.33s
2026-01-28 01:09:07.657225 | orchestrator | 2026-01-28 01:09:07 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED
2026-01-28 01:09:07.657235 | orchestrator | 2026-01-28 01:09:07 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:09:10.695029 | orchestrator | 2026-01-28 01:09:10 | INFO  | Task b92f6816-a5e6-4a72-8b42-9dfc7587c1e4 is in state STARTED
2026-01-28 01:09:10.698166 | orchestrator | 2026-01-28 01:09:10 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED
2026-01-28 01:09:10.700545 | orchestrator | 2026-01-28 01:09:10 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED
2026-01-28 01:09:10.702919 | orchestrator | 2026-01-28 01:09:10 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED
2026-01-28 01:09:10.703147 | orchestrator | 2026-01-28 01:09:10 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:09:13.751324 | orchestrator | 2026-01-28 01:09:13 | INFO  | Task b92f6816-a5e6-4a72-8b42-9dfc7587c1e4 is in state STARTED
2026-01-28 01:09:13.751963 | orchestrator | 2026-01-28 01:09:13 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED
2026-01-28 01:09:13.755312 | orchestrator | 2026-01-28 01:09:13 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED
2026-01-28 01:09:13.756419 | orchestrator | 2026-01-28 01:09:13 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED
2026-01-28 01:09:13.756462 | orchestrator | 2026-01-28 01:09:13 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:09:16.796032 | orchestrator | 2026-01-28 01:09:16 | INFO  | Task b92f6816-a5e6-4a72-8b42-9dfc7587c1e4 is in state STARTED
2026-01-28 01:09:16.796192 | orchestrator | 2026-01-28 01:09:16 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED
2026-01-28 01:09:16.798579 | orchestrator | 2026-01-28 01:09:16 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED
2026-01-28 01:09:16.799109 | orchestrator | 2026-01-28 01:09:16 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED
2026-01-28 01:09:16.799146 | orchestrator | 2026-01-28 01:09:16 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:09:19.848089 | orchestrator | 2026-01-28 01:09:19 | INFO  | Task b92f6816-a5e6-4a72-8b42-9dfc7587c1e4 is in state STARTED
2026-01-28 01:09:19.848221 | orchestrator | 2026-01-28 01:09:19 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED
2026-01-28 01:09:19.848404 | orchestrator | 2026-01-28 01:09:19 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED
2026-01-28 01:09:19.848995 | orchestrator | 2026-01-28 01:09:19 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED
2026-01-28 01:09:19.849610 | orchestrator | 2026-01-28 01:09:19 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:09:22.885963 | orchestrator | 2026-01-28 01:09:22 | INFO  | Task b92f6816-a5e6-4a72-8b42-9dfc7587c1e4 is in state STARTED
2026-01-28 01:09:22.889071 | orchestrator | 2026-01-28 01:09:22 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED
2026-01-28 01:09:22.889490 | orchestrator | 2026-01-28 01:09:22 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED
2026-01-28 01:09:22.891293 | orchestrator | 2026-01-28 01:09:22 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED
2026-01-28 01:09:22.891341 | orchestrator | 2026-01-28 01:09:22 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:09:25.929918 | orchestrator | 2026-01-28 01:09:25 | INFO  | Task b92f6816-a5e6-4a72-8b42-9dfc7587c1e4 is in state STARTED
2026-01-28 01:09:25.931064 | orchestrator | 2026-01-28 01:09:25 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED
2026-01-28 01:09:25.932571 | orchestrator | 2026-01-28 01:09:25 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state STARTED
2026-01-28 01:09:25.934010 | orchestrator | 2026-01-28 01:09:25 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED
2026-01-28 01:09:25.934147 | orchestrator | 2026-01-28 01:09:25 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:09:28.971705 | orchestrator | 2026-01-28 01:09:28 | INFO  | Task b92f6816-a5e6-4a72-8b42-9dfc7587c1e4 is in state STARTED
2026-01-28 01:09:28.972762 | orchestrator | 2026-01-28 01:09:28 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED
2026-01-28 01:09:28.977454 | orchestrator |
2026-01-28 01:09:28.977525 | orchestrator |
2026-01-28 01:09:28.977540 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 01:09:28.977553 |
orchestrator |
2026-01-28 01:09:28.977564 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 01:09:28.977575 | orchestrator | Wednesday 28 January 2026 01:06:53 +0000 (0:00:00.188) 0:00:00.189 *****
2026-01-28 01:09:28.977587 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:09:28.977599 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:09:28.977609 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:09:28.977620 | orchestrator |
2026-01-28 01:09:28.977631 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 01:09:28.977677 | orchestrator | Wednesday 28 January 2026 01:06:54 +0000 (0:00:00.300) 0:00:00.489 *****
2026-01-28 01:09:28.977690 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-01-28 01:09:28.977701 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-01-28 01:09:28.977712 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-01-28 01:09:28.977723 | orchestrator |
2026-01-28 01:09:28.977734 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-01-28 01:09:28.977745 | orchestrator |
2026-01-28 01:09:28.977756 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-28 01:09:28.977768 | orchestrator | Wednesday 28 January 2026 01:06:54 +0000 (0:00:00.366) 0:00:00.856 *****
2026-01-28 01:09:28.977779 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:09:28.977791 | orchestrator |
2026-01-28 01:09:28.977802 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-01-28 01:09:28.977813 | orchestrator | Wednesday 28 January 2026 01:06:54 +0000 (0:00:00.469) 0:00:01.325 *****
2026-01-28 01:09:28.977824 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-01-28 01:09:28.977999 | orchestrator |
2026-01-28 01:09:28.978014 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-01-28 01:09:28.978076 | orchestrator | Wednesday 28 January 2026 01:06:58 +0000 (0:00:03.490) 0:00:04.816 *****
2026-01-28 01:09:28.978089 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-01-28 01:09:28.978103 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-01-28 01:09:28.978116 | orchestrator |
2026-01-28 01:09:28.978145 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-01-28 01:09:28.978184 | orchestrator | Wednesday 28 January 2026 01:07:04 +0000 (0:00:05.624) 0:00:10.441 *****
2026-01-28 01:09:28.978198 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-28 01:09:28.978211 | orchestrator |
2026-01-28 01:09:28.978223 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-01-28 01:09:28.978611 | orchestrator | Wednesday 28 January 2026 01:07:06 +0000 (0:00:02.896) 0:00:13.338 *****
2026-01-28 01:09:28.978749 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-28 01:09:28.978951 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-01-28 01:09:28.978965 | orchestrator |
2026-01-28 01:09:28.978976 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-01-28 01:09:28.978987 | orchestrator | Wednesday 28 January 2026 01:07:10 +0000 (0:00:03.429) 0:00:16.768 *****
2026-01-28 01:09:28.978998 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-28 01:09:28.979009 | orchestrator |
2026-01-28 01:09:28.979020 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-01-28 01:09:28.979031 | orchestrator | Wednesday 28 January 2026 01:07:13 +0000 (0:00:03.439) 0:00:20.207 *****
2026-01-28 01:09:28.979043 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-01-28 01:09:28.979053 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-01-28 01:09:28.979064 | orchestrator |
2026-01-28 01:09:28.979075 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-01-28 01:09:28.979086 | orchestrator | Wednesday 28 January 2026 01:07:20 +0000 (0:00:06.812) 0:00:27.020 *****
2026-01-28 01:09:28.979101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-01-28 01:09:28.979175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.979191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.979226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.979239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.979251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.979263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.979309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.979323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.979347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.979359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.979372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.979393 | orchestrator | 2026-01-28 01:09:28.979412 | 
orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-28 01:09:28.979431 | orchestrator | Wednesday 28 January 2026 01:07:22 +0000 (0:00:02.071) 0:00:29.092 *****
2026-01-28 01:09:28.979452 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:09:28.979470 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:09:28.979481 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:09:28.979492 | orchestrator |
2026-01-28 01:09:28.979503 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-01-28 01:09:28.979515 | orchestrator | Wednesday 28 January 2026 01:07:23 +0000 (0:00:00.518) 0:00:29.610 *****
2026-01-28 01:09:28.979525 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:09:28.979536 | orchestrator |
2026-01-28 01:09:28.979583 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-01-28 01:09:28.979598 | orchestrator | Wednesday 28 January 2026 01:07:24 +0000 (0:00:00.916) 0:00:30.526 *****
2026-01-28 01:09:28.979611 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-01-28 01:09:28.979624 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-01-28 01:09:28.979636 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-01-28 01:09:28.979687 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-01-28 01:09:28.979710 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-01-28 01:09:28.979722 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-01-28 01:09:28.979735 | orchestrator |
2026-01-28 01:09:28.979748 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-01-28 01:09:28.979760 | orchestrator | Wednesday 28 January 2026 01:07:26 +0000 (0:00:02.439) 0:00:32.966 *****
2026-01-28 01:09:28.979774 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-28 01:09:28.979794 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-28 01:09:28.979810 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-28 01:09:28.979824 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-28 01:09:28.979877 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-28 01:09:28.979898 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-01-28 01:09:28.979915 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-28 01:09:28.979928 | orchestrator | 
changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-28 01:09:28.979939 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-28 01:09:28.979987 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-28 01:09:28.980013 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-01-28 01:09:28.980030 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}]) 2026-01-28 01:09:28.980042 | orchestrator | 2026-01-28 01:09:28.980053 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-01-28 01:09:28.980064 | orchestrator | Wednesday 28 January 2026 01:07:30 +0000 (0:00:03.627) 0:00:36.594 ***** 2026-01-28 01:09:28.980075 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-28 01:09:28.980086 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-28 01:09:28.980097 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-01-28 01:09:28.980108 | orchestrator | 2026-01-28 01:09:28.980119 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-01-28 01:09:28.980130 | orchestrator | Wednesday 28 January 2026 01:07:32 +0000 (0:00:02.060) 0:00:38.655 ***** 2026-01-28 01:09:28.980141 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-01-28 01:09:28.980152 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-01-28 01:09:28.980162 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-01-28 01:09:28.980173 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-01-28 01:09:28.980183 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-01-28 01:09:28.980194 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-01-28 01:09:28.980205 | orchestrator | 2026-01-28 01:09:28.980215 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-01-28 01:09:28.980226 | orchestrator | Wednesday 28 January 2026 01:07:35 +0000 (0:00:03.505) 0:00:42.161 ***** 2026-01-28 01:09:28.980237 | orchestrator | ok: 
[testbed-node-0] => (item=cinder-volume) 2026-01-28 01:09:28.980248 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-01-28 01:09:28.980266 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-01-28 01:09:28.980277 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-01-28 01:09:28.980288 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-01-28 01:09:28.980298 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-01-28 01:09:28.980309 | orchestrator | 2026-01-28 01:09:28.980320 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-01-28 01:09:28.980331 | orchestrator | Wednesday 28 January 2026 01:07:36 +0000 (0:00:01.098) 0:00:43.260 ***** 2026-01-28 01:09:28.980341 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:28.980352 | orchestrator | 2026-01-28 01:09:28.980363 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-01-28 01:09:28.980373 | orchestrator | Wednesday 28 January 2026 01:07:36 +0000 (0:00:00.173) 0:00:43.433 ***** 2026-01-28 01:09:28.980384 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:28.980395 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:28.980437 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:28.980450 | orchestrator | 2026-01-28 01:09:28.980462 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-28 01:09:28.980474 | orchestrator | Wednesday 28 January 2026 01:07:37 +0000 (0:00:00.319) 0:00:43.752 ***** 2026-01-28 01:09:28.980485 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:09:28.980496 | orchestrator | 2026-01-28 01:09:28.980507 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-01-28 01:09:28.980517 | orchestrator | Wednesday 
28 January 2026 01:07:37 +0000 (0:00:00.620) 0:00:44.373 ***** 2026-01-28 01:09:28.980529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.980546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.980558 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.980585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.980629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.980790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.980825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.980846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.980857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.980879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.980950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.980964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.980976 | orchestrator | 2026-01-28 01:09:28.980987 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-01-28 01:09:28.980999 | orchestrator | Wednesday 28 January 2026 01:07:42 +0000 (0:00:04.323) 0:00:48.696 ***** 2026-01-28 01:09:28.981015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 01:09:28.981027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981077 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:28.981088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 01:09:28.981100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981146 | orchestrator | skipping: 
[testbed-node-1] 2026-01-28 01:09:28.981157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 01:09:28.981179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981226 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:28.981236 | orchestrator | 2026-01-28 01:09:28.981246 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-01-28 01:09:28.981256 | orchestrator | Wednesday 28 January 2026 01:07:43 +0000 (0:00:01.696) 0:00:50.393 ***** 2026-01-28 01:09:28.981266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 01:09:28.981368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981410 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:28.981427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 01:09:28.981446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981481 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:28.981492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 01:09:28.981502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.981545 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:28.981555 | orchestrator | 2026-01-28 01:09:28.981565 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-01-28 01:09:28.981574 | orchestrator | Wednesday 28 January 2026 01:07:45 +0000 (0:00:01.879) 0:00:52.272 ***** 2026-01-28 01:09:28.981585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.981602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.981617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.981635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981808 | orchestrator | 2026-01-28 01:09:28.981818 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-01-28 01:09:28.981828 | orchestrator | Wednesday 28 January 2026 01:07:50 +0000 (0:00:04.753) 0:00:57.025 ***** 2026-01-28 01:09:28.981838 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-28 01:09:28.981853 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-28 01:09:28.981863 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-01-28 01:09:28.981873 | orchestrator | 2026-01-28 01:09:28.981883 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-01-28 01:09:28.981893 | orchestrator | Wednesday 28 January 2026 01:07:52 +0000 (0:00:01.816) 0:00:58.842 ***** 2026-01-28 01:09:28.981903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.981924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.981935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.981945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 2026-01-28 01:09:28 | INFO  | Task 6444899f-b4d2-407f-a9f0-26e6990e3b6b is in state SUCCESS 2026-01-28 01:09:28.981974 | orchestrator | 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.981986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 
01:09:28.982059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982114 | orchestrator | 2026-01-28 01:09:28.982124 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-01-28 01:09:28.982134 | orchestrator | Wednesday 28 January 2026 01:08:05 +0000 (0:00:12.780) 0:01:11.622 ***** 2026-01-28 01:09:28.982144 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:09:28.982154 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:09:28.982163 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:09:28.982173 | orchestrator | 2026-01-28 01:09:28.982183 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-01-28 01:09:28.982193 | orchestrator | Wednesday 28 January 2026 01:08:06 +0000 (0:00:01.485) 0:01:13.108 ***** 2026-01-28 01:09:28.982203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 01:09:28.982241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.982253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.982269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.982285 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:28.982296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 01:09:28.982311 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.982321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.982332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.982342 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:28.982358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-01-28 01:09:28.982375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.982385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.982400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-01-28 01:09:28.982410 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:28.982420 | orchestrator | 2026-01-28 01:09:28.982430 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-01-28 01:09:28.982440 | orchestrator | Wednesday 28 January 2026 01:08:07 +0000 (0:00:00.600) 0:01:13.709 ***** 2026-01-28 01:09:28.982450 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:28.982459 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:28.982469 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:28.982479 | orchestrator | 2026-01-28 01:09:28.982489 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-01-28 
01:09:28.982499 | orchestrator | Wednesday 28 January 2026 01:08:07 +0000 (0:00:00.331) 0:01:14.040 ***** 2026-01-28 01:09:28.982509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.982531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 
01:09:28.982542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-01-28 01:09:28.982556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982667 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-01-28 01:09:28.982694 | orchestrator | 2026-01-28 01:09:28.982704 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-01-28 01:09:28.982714 | orchestrator | Wednesday 28 January 2026 01:08:10 +0000 (0:00:02.548) 0:01:16.589 ***** 2026-01-28 01:09:28.982723 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:09:28.982733 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:09:28.982743 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:09:28.982752 | orchestrator | 2026-01-28 01:09:28.982762 | orchestrator 
| TASK [cinder : Creating Cinder database] ***************************************
2026-01-28 01:09:28.982772 | orchestrator | Wednesday 28 January 2026 01:08:11 +0000 (0:00:00.955) 0:01:17.544 *****
2026-01-28 01:09:28.982781 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:09:28.982791 | orchestrator |
2026-01-28 01:09:28.982800 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-01-28 01:09:28.982810 | orchestrator | Wednesday 28 January 2026 01:08:13 +0000 (0:00:01.953) 0:01:19.498 *****
2026-01-28 01:09:28.982838 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:09:28.982849 | orchestrator |
2026-01-28 01:09:28.982858 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-01-28 01:09:28.982868 | orchestrator | Wednesday 28 January 2026 01:08:15 +0000 (0:00:02.381) 0:01:21.879 *****
2026-01-28 01:09:28.982878 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:09:28.982897 | orchestrator |
2026-01-28 01:09:28.982907 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-28 01:09:28.982916 | orchestrator | Wednesday 28 January 2026 01:08:32 +0000 (0:00:16.783) 0:01:38.663 *****
2026-01-28 01:09:28.982926 | orchestrator |
2026-01-28 01:09:28.982936 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-28 01:09:28.982945 | orchestrator | Wednesday 28 January 2026 01:08:32 +0000 (0:00:00.075) 0:01:38.738 *****
2026-01-28 01:09:28.982955 | orchestrator |
2026-01-28 01:09:28.982965 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-01-28 01:09:28.982974 | orchestrator | Wednesday 28 January 2026 01:08:32 +0000 (0:00:00.072) 0:01:38.811 *****
2026-01-28 01:09:28.982984 | orchestrator |
2026-01-28 01:09:28.982994 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-01-28 01:09:28.983003 | orchestrator | Wednesday 28 January 2026 01:08:32 +0000 (0:00:00.069) 0:01:38.881 *****
2026-01-28 01:09:28.983013 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:09:28.983022 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:09:28.983032 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:09:28.983042 | orchestrator |
2026-01-28 01:09:28.983051 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-01-28 01:09:28.983080 | orchestrator | Wednesday 28 January 2026 01:08:54 +0000 (0:00:22.387) 0:02:01.268 *****
2026-01-28 01:09:28.983101 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:09:28.983111 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:09:28.983121 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:09:28.983130 | orchestrator |
2026-01-28 01:09:28.983140 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-01-28 01:09:28.983150 | orchestrator | Wednesday 28 January 2026 01:09:05 +0000 (0:00:10.647) 0:02:11.915 *****
2026-01-28 01:09:28.983160 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:09:28.983169 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:09:28.983179 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:09:28.983188 | orchestrator |
2026-01-28 01:09:28.983198 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-01-28 01:09:28.983212 | orchestrator | Wednesday 28 January 2026 01:09:21 +0000 (0:00:16.512) 0:02:28.428 *****
2026-01-28 01:09:28.983229 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:09:28.983239 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:09:28.983249 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:09:28.983258 | orchestrator |
2026-01-28 01:09:28.983268 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-01-28 01:09:28.983277 | orchestrator | Wednesday 28 January 2026 01:09:27 +0000 (0:00:05.683) 0:02:34.112 *****
2026-01-28 01:09:28.983287 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:09:28.983297 | orchestrator |
2026-01-28 01:09:28.983306 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 01:09:28.983317 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-01-28 01:09:28.983327 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-28 01:09:28.983337 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-01-28 01:09:28.983347 | orchestrator |
2026-01-28 01:09:28.983357 | orchestrator |
2026-01-28 01:09:28.983366 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 01:09:28.983376 | orchestrator | Wednesday 28 January 2026 01:09:27 +0000 (0:00:00.237) 0:02:34.349 *****
2026-01-28 01:09:28.983386 | orchestrator | ===============================================================================
2026-01-28 01:09:28.983395 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.39s
2026-01-28 01:09:28.983405 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 16.78s
2026-01-28 01:09:28.983415 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 16.51s
2026-01-28 01:09:28.983424 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.78s
2026-01-28 01:09:28.983434 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.65s
2026-01-28 01:09:28.983443 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.81s
2026-01-28 01:09:28.983453 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.68s
2026-01-28 01:09:28.983472 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.62s
2026-01-28 01:09:28.983482 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.75s
2026-01-28 01:09:28.983492 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.32s
2026-01-28 01:09:28.983501 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.63s
2026-01-28 01:09:28.983511 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.51s
2026-01-28 01:09:28.983520 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.49s
2026-01-28 01:09:28.983530 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.44s
2026-01-28 01:09:28.983539 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.43s
2026-01-28 01:09:28.983556 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.90s
2026-01-28 01:09:28.983566 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.55s
2026-01-28 01:09:28.983575 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.44s
2026-01-28 01:09:28.983585 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.38s
2026-01-28 01:09:28.983594 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.07s
2026-01-28 01:09:28.983604 | orchestrator | 2026-01-28 01:09:28 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED
2026-01-28 01:09:28.983614 | orchestrator | 2026-01-28 01:09:28 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:09:32.023895 | orchestrator |
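The PLAY RECAP lines above use a fixed `key=value` layout, so their counters can be extracted mechanically when post-processing a job log. A minimal sketch, assuming only the line format shown in the log (`parse_recap` is a hypothetical helper, not part of Zuul or kolla-ansible):

```python
import re

def parse_recap(line: str) -> dict:
    """Extract host name and counters from an Ansible PLAY RECAP line."""
    # Split "testbed-node-0 : ok=30 changed=22 ..." at the first colon,
    # then pull every key=value pair out of the remainder.
    host, _, rest = line.partition(":")
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return {"host": host.strip(), **counters}

stats = parse_recap(
    "testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 "
    "skipped=9  rescued=0 ignored=0"
)
```

A log scanner could flag a run as broken whenever `failed` or `unreachable` is non-zero for any host.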
2026-01-28 01:11:12.621905 | orchestrator | 2026-01-28 01:11:12 | INFO  | Task b92f6816-a5e6-4a72-8b42-9dfc7587c1e4 is in state SUCCESS
2026-01-28 01:11:12.623694 | orchestrator |
2026-01-28 01:11:12.623741 | orchestrator |
2026-01-28 01:11:12.623753 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 01:11:12.623764 | orchestrator |
2026-01-28 01:11:12.623775 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 01:11:12.623785 | orchestrator | Wednesday 28 January 2026 01:09:10 +0000 (0:00:00.279) 0:00:00.279 *****
2026-01-28 01:11:12.623795 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:11:12.623806 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:11:12.623815 | orchestrator | ok: [testbed-node-2]
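The "Task … is in state STARTED / Wait 1 second(s) until the next check" cycle above is a plain poll-until-terminal loop over task IDs. A minimal sketch of that pattern, assuming a caller-supplied `get_state` callable (hypothetical; this is not the OSISM manager API):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll each task until every one reaches a terminal state.

    `get_state(task_id)` is assumed to return a state string such as
    'STARTED' or 'SUCCESS'. Raises TimeoutError if tasks stay pending.
    """
    terminal = {"SUCCESS", "FAILURE"}
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        # Drop every task that has reached a terminal state.
        pending = {t for t in pending if get_state(t) not in terminal}
        if pending:
            time.sleep(interval)
    return True

# Usage sketch: a fake state source that reports SUCCESS on the third poll.
calls = {"b92f6816": 0}

def fake_state(task_id):
    calls[task_id] += 1
    return "SUCCESS" if calls[task_id] >= 3 else "STARTED"

done = wait_for_tasks(fake_state, ["b92f6816"], interval=0.0)
```

Using `time.monotonic()` for the deadline avoids surprises from wall-clock adjustments during a long deployment.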
2026-01-28 01:11:12.623824 | orchestrator | 2026-01-28 01:11:12.623833 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-01-28 01:11:12.623841 | orchestrator | Wednesday 28 January 2026 01:09:11 +0000 (0:00:00.315) 0:00:00.595 ***** 2026-01-28 01:11:12.623850 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-01-28 01:11:12.623858 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-01-28 01:11:12.623867 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-01-28 01:11:12.623876 | orchestrator | 2026-01-28 01:11:12.623884 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-01-28 01:11:12.623892 | orchestrator | 2026-01-28 01:11:12.623902 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-28 01:11:12.623910 | orchestrator | Wednesday 28 January 2026 01:09:11 +0000 (0:00:00.485) 0:00:01.080 ***** 2026-01-28 01:11:12.623946 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:11:12.623956 | orchestrator | 2026-01-28 01:11:12.623964 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-01-28 01:11:12.623973 | orchestrator | Wednesday 28 January 2026 01:09:12 +0000 (0:00:00.548) 0:00:01.628 ***** 2026-01-28 01:11:12.623984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 
'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.624010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.624021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.624029 | orchestrator | 2026-01-28 01:11:12.624038 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-01-28 01:11:12.624046 | orchestrator | Wednesday 28 January 2026 01:09:12 +0000 (0:00:00.695) 0:00:02.324 ***** 2026-01-28 01:11:12.624556 | orchestrator | [WARNING]: Skipped 
'/operations/prometheus/grafana' path due to this access 2026-01-28 01:11:12.624615 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-01-28 01:11:12.624627 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 01:11:12.624636 | orchestrator | 2026-01-28 01:11:12.624645 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-01-28 01:11:12.624729 | orchestrator | Wednesday 28 January 2026 01:09:13 +0000 (0:00:00.837) 0:00:03.162 ***** 2026-01-28 01:11:12.624738 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:11:12.624748 | orchestrator | 2026-01-28 01:11:12.624758 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-01-28 01:11:12.624768 | orchestrator | Wednesday 28 January 2026 01:09:14 +0000 (0:00:00.781) 0:00:03.943 ***** 2026-01-28 01:11:12.624793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.624816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.624826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.624835 | orchestrator | 2026-01-28 01:11:12.624845 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-01-28 01:11:12.624854 | orchestrator | Wednesday 28 January 2026 01:09:15 +0000 (0:00:01.350) 0:00:05.294 ***** 2026-01-28 01:11:12.624871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-28 01:11:12.624881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-28 01:11:12.624890 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:11:12.625157 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:11:12.625194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-28 01:11:12.625215 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:11:12.625225 | orchestrator | 2026-01-28 01:11:12.625234 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-01-28 01:11:12.625244 | 
orchestrator | Wednesday 28 January 2026 01:09:16 +0000 (0:00:00.439) 0:00:05.734 ***** 2026-01-28 01:11:12.625254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-28 01:11:12.625263 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:11:12.625273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-28 01:11:12.625283 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:11:12.625299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-01-28 01:11:12.625308 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:11:12.625318 | orchestrator | 2026-01-28 01:11:12.625327 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-01-28 01:11:12.625335 | orchestrator | Wednesday 28 January 2026 01:09:17 +0000 (0:00:00.854) 0:00:06.588 ***** 2026-01-28 01:11:12.625343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.625367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.625384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.625391 | orchestrator | 2026-01-28 01:11:12.625398 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-01-28 01:11:12.625406 | orchestrator | Wednesday 28 January 2026 01:09:18 +0000 (0:00:01.198) 0:00:07.787 ***** 2026-01-28 01:11:12.625413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.625421 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.625434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-01-28 01:11:12.625441 | orchestrator | 2026-01-28 01:11:12.625447 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-01-28 01:11:12.625454 | orchestrator | Wednesday 28 January 2026 01:09:19 +0000 (0:00:01.250) 0:00:09.037 ***** 2026-01-28 01:11:12.625462 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:11:12.625469 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:11:12.625475 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:11:12.625482 | orchestrator | 2026-01-28 01:11:12.625495 | orchestrator | TASK [grafana : Configuring Prometheus as data 
source for Grafana] ************* 2026-01-28 01:11:12.625502 | orchestrator | Wednesday 28 January 2026 01:09:20 +0000 (0:00:00.566) 0:00:09.603 ***** 2026-01-28 01:11:12.625510 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-28 01:11:12.625518 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-28 01:11:12.625525 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-01-28 01:11:12.625532 | orchestrator | 2026-01-28 01:11:12.625539 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-01-28 01:11:12.625546 | orchestrator | Wednesday 28 January 2026 01:09:21 +0000 (0:00:01.117) 0:00:10.720 ***** 2026-01-28 01:11:12.625553 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-28 01:11:12.625600 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-28 01:11:12.625610 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-01-28 01:11:12.625617 | orchestrator | 2026-01-28 01:11:12.625624 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-01-28 01:11:12.625633 | orchestrator | Wednesday 28 January 2026 01:09:22 +0000 (0:00:01.093) 0:00:11.814 ***** 2026-01-28 01:11:12.625640 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 01:11:12.625647 | orchestrator | 2026-01-28 01:11:12.625654 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-01-28 01:11:12.625661 | orchestrator | Wednesday 28 January 2026 01:09:23 +0000 (0:00:00.826) 0:00:12.641 ***** 2026-01-28 01:11:12.625669 | 
orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-01-28 01:11:12.625676 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-01-28 01:11:12.625684 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:11:12.625691 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:11:12.625699 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:11:12.625706 | orchestrator | 2026-01-28 01:11:12.625713 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-01-28 01:11:12.625721 | orchestrator | Wednesday 28 January 2026 01:09:23 +0000 (0:00:00.790) 0:00:13.431 ***** 2026-01-28 01:11:12.625728 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:11:12.625734 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:11:12.625742 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:11:12.625749 | orchestrator | 2026-01-28 01:11:12.625756 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-01-28 01:11:12.625763 | orchestrator | Wednesday 28 January 2026 01:09:24 +0000 (0:00:00.432) 0:00:13.863 ***** 2026-01-28 01:11:12.625771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1331073, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.344494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1331073, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.344494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1331073, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.344494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1331163, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.356032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625840 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1331163, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.356032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1331163, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.356032, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1331097, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3466804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625866 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1331097, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3466804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1331097, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3466804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1331167, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3579545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625917 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1331167, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3579545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1331167, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3579545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1331125, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.350462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-01-28 01:11:12.625946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1331125, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.350462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1331125, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.350462, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1331152, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3544621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.625986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1331152, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3544621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1331152, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3544621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1331068, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3419914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626069 to 01:11:12.626827 | orchestrator | changed: [testbed-node-0] / [testbed-node-1] / [testbed-node-2] => (one item per Grafana dashboard file under /operations/grafana/dashboards/; every entry a regular file, mode '0644', uid 0, gid 0 (root:root), dev 143, nlink 1, atime/mtime 1764530892.0; inode and ctime vary per file):
  ceph/README.md (84 bytes), ceph/ceph-cluster.json (34113 bytes), ceph/cephfs-overview.json (9025 bytes), ceph/pool-detail.json (19609 bytes), ceph/rbd-details.json (12997 bytes), ceph/ceph_overview.json (80386 bytes), ceph/radosgw-detail.json (19695 bytes), ceph/osds-overview.json (38432 bytes), ceph/multi-cluster-overview.json (62676 bytes), ceph/hosts-overview.json (27218 bytes), ceph/pool-overview.json (49139 bytes), ceph/host-details.json (44791 bytes), ceph/radosgw-sync-overview.json (16156 bytes), openstack/openstack.json (57270 bytes), infrastructure/haproxy.json (410814 bytes), infrastructure/database.json (30898 bytes), infrastructure/node-rsrc-use.json (15725 bytes), infrastructure/alertmanager-overview.json (9645 bytes), infrastructure/opensearch.json (65458 bytes), infrastructure/node_exporter_full.json (682774 bytes)
2026-01-28 01:11:12.626838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774,
'inode': 1331260, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3812082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1331303, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3844626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1331303, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3844626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1331303, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3844626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1331335, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3913906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1331335, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3913906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1331335, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3913906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1331294, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3828757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1331294, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3828757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626942 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1331294, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3828757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.626956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1331248, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3714716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1331248, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3714716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627063 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1331248, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3714716, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1331215, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3648808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1331215, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3648808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-01-28 01:11:12.627099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1331215, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3648808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1331242, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3708706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1331242, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3708706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1331242, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3708706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1331203, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3634622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1331203, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3634622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1331203, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3634622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1331251, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3722668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1331251, 'dev': 143, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3722668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1331251, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3722668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1331319, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3902674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 222049, 'inode': 1331319, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3902674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1331319, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3902674, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1331314, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.387829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1331314, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.387829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1331314, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.387829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1331185, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3594623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1331185, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3594623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1331185, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3594623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1331191, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.360965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627318 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1331191, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.360965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1331191, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.360965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1331289, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.382128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-01-28 01:11:12.627343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1331289, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.382128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1331289, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.382128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-01-28 01:11:12.627366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1331311, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3854625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-28 01:11:12.627374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1331311, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3854625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-28 01:11:12.627381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1331311, 'dev': 143, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1769559572.3854625, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-01-28 01:11:12.627388 | orchestrator |
2026-01-28 01:11:12.627396 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-01-28 01:11:12.627404 | orchestrator | Wednesday 28 January 2026 01:09:57 +0000 (0:00:33.244) 0:00:47.108 *****
2026-01-28 01:11:12.627415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-28 01:11:12.627423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-28 01:11:12.627437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-01-28 01:11:12.627444 | 
orchestrator | 2026-01-28 01:11:12.627451 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-01-28 01:11:12.627458 | orchestrator | Wednesday 28 January 2026 01:09:58 +0000 (0:00:00.909) 0:00:48.017 ***** 2026-01-28 01:11:12.627466 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:11:12.627473 | orchestrator | 2026-01-28 01:11:12.627480 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-01-28 01:11:12.627491 | orchestrator | Wednesday 28 January 2026 01:10:00 +0000 (0:00:01.912) 0:00:49.930 ***** 2026-01-28 01:11:12.627498 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:11:12.627506 | orchestrator | 2026-01-28 01:11:12.627512 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-28 01:11:12.627519 | orchestrator | Wednesday 28 January 2026 01:10:02 +0000 (0:00:02.064) 0:00:51.995 ***** 2026-01-28 01:11:12.627527 | orchestrator | 2026-01-28 01:11:12.627533 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-28 01:11:12.627541 | orchestrator | Wednesday 28 January 2026 01:10:02 +0000 (0:00:00.066) 0:00:52.061 ***** 2026-01-28 01:11:12.627548 | orchestrator | 2026-01-28 01:11:12.627555 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-01-28 01:11:12.627562 | orchestrator | Wednesday 28 January 2026 01:10:02 +0000 (0:00:00.061) 0:00:52.123 ***** 2026-01-28 01:11:12.627570 | orchestrator | 2026-01-28 01:11:12.627578 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-01-28 01:11:12.627637 | orchestrator | Wednesday 28 January 2026 01:10:02 +0000 (0:00:00.230) 0:00:52.353 ***** 2026-01-28 01:11:12.627645 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:11:12.627652 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:11:12.627659 | 
orchestrator | changed: [testbed-node-0] 2026-01-28 01:11:12.627666 | orchestrator | 2026-01-28 01:11:12.627674 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-01-28 01:11:12.627681 | orchestrator | Wednesday 28 January 2026 01:10:04 +0000 (0:00:01.681) 0:00:54.035 ***** 2026-01-28 01:11:12.627689 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:11:12.627697 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:11:12.627706 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-01-28 01:11:12.627715 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-01-28 01:11:12.627725 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-01-28 01:11:12.627733 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:11:12.627741 | orchestrator | 2026-01-28 01:11:12.627748 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-01-28 01:11:12.627756 | orchestrator | Wednesday 28 January 2026 01:10:42 +0000 (0:00:37.684) 0:01:31.719 ***** 2026-01-28 01:11:12.627763 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:11:12.627771 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:11:12.627784 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:11:12.627791 | orchestrator | 2026-01-28 01:11:12.627798 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-01-28 01:11:12.627806 | orchestrator | Wednesday 28 January 2026 01:11:06 +0000 (0:00:24.591) 0:01:56.310 ***** 2026-01-28 01:11:12.627813 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:11:12.627821 | orchestrator | 2026-01-28 01:11:12.627828 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-01-28 
01:11:12.627836 | orchestrator | Wednesday 28 January 2026 01:11:08 +0000 (0:00:02.026) 0:01:58.336 ***** 2026-01-28 01:11:12.627845 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:11:12.627853 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:11:12.627861 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:11:12.627868 | orchestrator | 2026-01-28 01:11:12.627876 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-01-28 01:11:12.627883 | orchestrator | Wednesday 28 January 2026 01:11:09 +0000 (0:00:00.477) 0:01:58.813 ***** 2026-01-28 01:11:12.627898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-01-28 01:11:12.627910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-01-28 01:11:12.627920 | orchestrator | 2026-01-28 01:11:12.627929 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-01-28 01:11:12.627938 | orchestrator | Wednesday 28 January 2026 01:11:11 +0000 (0:00:02.027) 0:02:00.841 ***** 2026-01-28 01:11:12.627947 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:11:12.627956 | orchestrator | 2026-01-28 01:11:12.627966 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:11:12.627976 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-28 
01:11:12.627987 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-28 01:11:12.627997 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-28 01:11:12.628006 | orchestrator | 2026-01-28 01:11:12.628013 | orchestrator | 2026-01-28 01:11:12.628021 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:11:12.628028 | orchestrator | Wednesday 28 January 2026 01:11:11 +0000 (0:00:00.269) 0:02:01.110 ***** 2026-01-28 01:11:12.628035 | orchestrator | =============================================================================== 2026-01-28 01:11:12.628048 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 37.68s 2026-01-28 01:11:12.628056 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 33.24s 2026-01-28 01:11:12.628062 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.59s 2026-01-28 01:11:12.628069 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.06s 2026-01-28 01:11:12.628075 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.03s 2026-01-28 01:11:12.628082 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.03s 2026-01-28 01:11:12.628089 | orchestrator | grafana : Creating grafana database ------------------------------------- 1.91s 2026-01-28 01:11:12.628095 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.68s 2026-01-28 01:11:12.628108 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.35s 2026-01-28 01:11:12.628115 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.25s 2026-01-28 01:11:12.628122 | orchestrator | 
grafana : Copying over config.json files -------------------------------- 1.20s 2026-01-28 01:11:12.628129 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.12s 2026-01-28 01:11:12.628136 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.09s 2026-01-28 01:11:12.628143 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.91s 2026-01-28 01:11:12.628150 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.85s 2026-01-28 01:11:12.628157 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.84s 2026-01-28 01:11:12.628164 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.83s 2026-01-28 01:11:12.628170 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.79s 2026-01-28 01:11:12.628177 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.78s 2026-01-28 01:11:12.628183 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.70s 2026-01-28 01:11:12.628191 | orchestrator | 2026-01-28 01:11:12 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state STARTED 2026-01-28 01:11:12.629877 | orchestrator | 2026-01-28 01:11:12 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:11:12.630388 | orchestrator | 2026-01-28 01:11:12 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:11:15.683058 | orchestrator | 2026-01-28 01:11:15 | INFO  | Task 6e4b4461-2b3b-46ce-9aae-cb03a9b8bbdd is in state SUCCESS 2026-01-28 01:11:15.683158 | orchestrator | 2026-01-28 01:11:15 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:11:15.686304 | orchestrator | 2026-01-28 01:11:15 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 
01:11:15.686399 | orchestrator | 2026-01-28 01:11:15 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:27.552178 | orchestrator | 2026-01-28
01:14:27 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:14:27.554239 | orchestrator | 2026-01-28 01:14:27 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:27.554335 | orchestrator | 2026-01-28 01:14:27 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:30.578476 | orchestrator | 2026-01-28 01:14:30 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:14:30.578819 | orchestrator | 2026-01-28 01:14:30 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:30.579549 | orchestrator | 2026-01-28 01:14:30 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:33.611715 | orchestrator | 2026-01-28 01:14:33 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:14:33.611982 | orchestrator | 2026-01-28 01:14:33 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:33.612006 | orchestrator | 2026-01-28 01:14:33 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:36.633171 | orchestrator | 2026-01-28 01:14:36 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:14:36.634482 | orchestrator | 2026-01-28 01:14:36 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:36.634517 | orchestrator | 2026-01-28 01:14:36 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:39.671353 | orchestrator | 2026-01-28 01:14:39 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:14:39.673423 | orchestrator | 2026-01-28 01:14:39 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:39.673670 | orchestrator | 2026-01-28 01:14:39 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:42.717789 | orchestrator | 2026-01-28 01:14:42 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state 
STARTED 2026-01-28 01:14:42.719863 | orchestrator | 2026-01-28 01:14:42 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:42.719974 | orchestrator | 2026-01-28 01:14:42 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:45.770225 | orchestrator | 2026-01-28 01:14:45 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:14:45.772699 | orchestrator | 2026-01-28 01:14:45 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:45.772890 | orchestrator | 2026-01-28 01:14:45 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:48.814605 | orchestrator | 2026-01-28 01:14:48 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:14:48.816123 | orchestrator | 2026-01-28 01:14:48 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:48.816175 | orchestrator | 2026-01-28 01:14:48 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:51.868913 | orchestrator | 2026-01-28 01:14:51 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:14:51.870230 | orchestrator | 2026-01-28 01:14:51 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:51.870472 | orchestrator | 2026-01-28 01:14:51 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:54.917797 | orchestrator | 2026-01-28 01:14:54 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:14:54.920097 | orchestrator | 2026-01-28 01:14:54 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:54.920203 | orchestrator | 2026-01-28 01:14:54 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:14:57.963591 | orchestrator | 2026-01-28 01:14:57 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:14:57.966743 | orchestrator | 2026-01-28 01:14:57 | INFO  
| Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:14:57.967256 | orchestrator | 2026-01-28 01:14:57 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:15:01.013658 | orchestrator | 2026-01-28 01:15:01 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:15:01.018818 | orchestrator | 2026-01-28 01:15:01 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:15:01.018897 | orchestrator | 2026-01-28 01:15:01 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:15:04.064439 | orchestrator | 2026-01-28 01:15:04 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:15:04.065796 | orchestrator | 2026-01-28 01:15:04 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:15:04.065856 | orchestrator | 2026-01-28 01:15:04 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:15:07.110107 | orchestrator | 2026-01-28 01:15:07 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:15:07.111292 | orchestrator | 2026-01-28 01:15:07 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:15:07.111329 | orchestrator | 2026-01-28 01:15:07 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:15:10.143332 | orchestrator | 2026-01-28 01:15:10 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:15:10.146412 | orchestrator | 2026-01-28 01:15:10 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 01:15:10.146511 | orchestrator | 2026-01-28 01:15:10 | INFO  | Wait 1 second(s) until the next check 2026-01-28 01:15:13.185439 | orchestrator | 2026-01-28 01:15:13 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED 2026-01-28 01:15:13.187734 | orchestrator | 2026-01-28 01:15:13 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state STARTED 2026-01-28 
01:15:13.187830 | orchestrator | 2026-01-28 01:15:13 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:15:16.230239 | orchestrator | 2026-01-28 01:15:16 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED
2026-01-28 01:15:16.235459 | orchestrator | 2026-01-28 01:15:16 | INFO  | Task 04b994d0-9bf9-4c06-bdc3-3e90a21f18c0 is in state SUCCESS
2026-01-28 01:15:16.237551 | orchestrator |
2026-01-28 01:15:16.237573 | orchestrator |
2026-01-28 01:15:16.237578 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 01:15:16.237583 | orchestrator |
2026-01-28 01:15:16.237588 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 01:15:16.237592 | orchestrator | Wednesday 28 January 2026 01:08:26 +0000 (0:00:00.195) 0:00:00.195 *****
2026-01-28 01:15:16.237596 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:16.237602 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:15:16.237606 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:15:16.237609 | orchestrator |
2026-01-28 01:15:16.237614 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 01:15:16.237618 | orchestrator | Wednesday 28 January 2026 01:08:26 +0000 (0:00:00.311) 0:00:00.507 *****
2026-01-28 01:15:16.237622 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-01-28 01:15:16.237626 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-01-28 01:15:16.237630 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-01-28 01:15:16.237634 | orchestrator |
2026-01-28 01:15:16.237638 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-01-28 01:15:16.237641 | orchestrator |
2026-01-28 01:15:16.237660 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-01-28 01:15:16.237664 | orchestrator | Wednesday 28 January 2026 01:08:27 +0000 (0:00:00.631) 0:00:01.138 *****
2026-01-28 01:15:16.237668 | orchestrator |
2026-01-28 01:15:16.237672 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-01-28 01:15:16.237676 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:16.237680 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:15:16.237683 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:15:16.237687 | orchestrator |
2026-01-28 01:15:16.237691 | orchestrator | PLAY RECAP *********************************************************************
2026-01-28 01:15:16.237695 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:15:16.237701 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:15:16.237704 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-01-28 01:15:16.237708 | orchestrator |
2026-01-28 01:15:16.237712 | orchestrator |
2026-01-28 01:15:16.237716 | orchestrator | TASKS RECAP ********************************************************************
2026-01-28 01:15:16.237719 | orchestrator | Wednesday 28 January 2026 01:11:12 +0000 (0:02:45.740) 0:02:46.878 *****
2026-01-28 01:15:16.237723 | orchestrator | ===============================================================================
2026-01-28 01:15:16.237727 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 165.74s
2026-01-28 01:15:16.237730 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-01-28 01:15:16.237734 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-01-28 01:15:16.237738 | orchestrator |
2026-01-28 01:15:16.237742 | orchestrator |
2026-01-28 01:15:16.237745 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 01:15:16.237749 | orchestrator |
2026-01-28 01:15:16.237753 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-01-28 01:15:16.237766 | orchestrator | Wednesday 28 January 2026 01:07:17 +0000 (0:00:00.579) 0:00:00.579 *****
2026-01-28 01:15:16.237770 | orchestrator | changed: [testbed-manager]
2026-01-28 01:15:16.237775 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.237778 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:16.237782 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:16.237786 | orchestrator | changed: [testbed-node-3]
2026-01-28 01:15:16.237789 | orchestrator | changed: [testbed-node-4]
2026-01-28 01:15:16.237793 | orchestrator | changed: [testbed-node-5]
2026-01-28 01:15:16.237797 | orchestrator |
2026-01-28 01:15:16.237801 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 01:15:16.237804 | orchestrator | Wednesday 28 January 2026 01:07:18 +0000 (0:00:00.728) 0:00:01.308 *****
2026-01-28 01:15:16.237808 | orchestrator | changed: [testbed-manager]
2026-01-28 01:15:16.237812 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.237816 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:16.237819 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:16.237823 | orchestrator | changed: [testbed-node-3]
2026-01-28 01:15:16.237827 | orchestrator | changed: [testbed-node-4]
2026-01-28 01:15:16.237831 | orchestrator | changed: [testbed-node-5]
2026-01-28 01:15:16.237834 | orchestrator |
2026-01-28 01:15:16.237838 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 01:15:16.237842 | orchestrator | Wednesday 28 January 2026 01:07:19 +0000 (0:00:00.833) 0:00:02.039 *****
2026-01-28 01:15:16.237846 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
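The "Group hosts based on ..." tasks above use Ansible's `group_by` to sort hosts into dynamic groups named after their configuration flags (for example `enable_nova_True`). As a rough illustration only (this is not the kolla-ansible implementation; the host names and flags below are made up), the same partitioning can be sketched in Python:

```python
# Sketch: build dynamic group names like "enable_nova_True" from per-host
# boolean flags, mirroring what the group_by tasks in the log achieve.
from collections import defaultdict


def group_hosts(hostvars: dict) -> dict:
    """Map group names such as 'enable_nova_True' to the matching hosts."""
    groups = defaultdict(list)
    for host, flags in hostvars.items():
        for flag, value in flags.items():
            groups[f"{flag}_{value}"].append(host)
    return dict(groups)


# Hypothetical inventory data for illustration.
hosts = {
    "testbed-node-0": {"enable_nova": True},
    "testbed-node-1": {"enable_nova": True},
    "testbed-node-2": {"enable_nova": False},
}
print(group_hosts(hosts)["enable_nova_True"])
# ['testbed-node-0', 'testbed-node-1']
```

Hosts then appear in the play output per group item, as in the `(item=enable_nova_True)` lines above.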
2026-01-28 01:15:16.237850 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-01-28 01:15:16.237853 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-01-28 01:15:16.237857 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-01-28 01:15:16.237864 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-01-28 01:15:16.237868 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-01-28 01:15:16.237872 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-01-28 01:15:16.237875 | orchestrator |
2026-01-28 01:15:16.237879 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-01-28 01:15:16.237883 | orchestrator |
2026-01-28 01:15:16.237886 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-28 01:15:16.237890 | orchestrator | Wednesday 28 January 2026 01:07:20 +0000 (0:00:00.833) 0:00:02.872 *****
2026-01-28 01:15:16.237894 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:15:16.237898 | orchestrator |
2026-01-28 01:15:16.237901 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-01-28 01:15:16.237912 | orchestrator | Wednesday 28 January 2026 01:07:20 +0000 (0:00:00.835) 0:00:03.708 *****
2026-01-28 01:15:16.237915 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-01-28 01:15:16.237919 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-01-28 01:15:16.237923 | orchestrator |
2026-01-28 01:15:16.237927 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-01-28 01:15:16.237931 | orchestrator | Wednesday 28 January 2026 01:07:24 +0000 (0:00:03.629) 0:00:07.337 *****
2026-01-28 01:15:16.237934 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-28 01:15:16.237938 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-01-28 01:15:16.237942 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.237946 | orchestrator |
2026-01-28 01:15:16.237949 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-28 01:15:16.237953 | orchestrator | Wednesday 28 January 2026 01:07:28 +0000 (0:00:04.423) 0:00:11.761 *****
2026-01-28 01:15:16.237957 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.237984 | orchestrator |
2026-01-28 01:15:16.237988 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-01-28 01:15:16.237991 | orchestrator | Wednesday 28 January 2026 01:07:29 +0000 (0:00:00.591) 0:00:12.352 *****
2026-01-28 01:15:16.237995 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.237999 | orchestrator |
2026-01-28 01:15:16.238003 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-01-28 01:15:16.238006 | orchestrator | Wednesday 28 January 2026 01:07:30 +0000 (0:00:01.369) 0:00:13.722 *****
2026-01-28 01:15:16.238010 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.238014 | orchestrator |
2026-01-28 01:15:16.238043 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-28 01:15:16.238047 | orchestrator | Wednesday 28 January 2026 01:07:33 +0000 (0:00:02.963) 0:00:16.685 *****
2026-01-28 01:15:16.238051 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.238054 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238058 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238062 | orchestrator |
2026-01-28 01:15:16.238066 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-28 01:15:16.238069 | orchestrator | Wednesday 28 January 2026 01:07:34 +0000 (0:00:00.756) 0:00:17.442 *****
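Both the OSISM task polling at the top of this excerpt (repeated `STARTED` checks until `SUCCESS`) and long-running steps like "Running Nova API bootstrap container" follow the same wait-until-terminal-state pattern. A minimal sketch of such a loop, under the assumption of a generic `get_state` callback (this is a hypothetical helper, not the actual osism client code):

```python
import time


def wait_for_state(get_state, ok=frozenset({"SUCCESS"}),
                   bad=frozenset({"FAILURE"}), interval=1.0, timeout=600.0):
    """Poll get_state() until it returns a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state in ok:
            return state
        if state in bad:
            raise RuntimeError(f"task ended in state {state}")
        time.sleep(interval)  # the log above shows one check every few seconds
    raise TimeoutError("task did not reach a terminal state in time")


# Simulated task that reports STARTED twice before SUCCESS.
states = iter(["STARTED", "STARTED", "SUCCESS"])
print(wait_for_state(lambda: next(states), interval=0.01))
# prints "SUCCESS"
```

In the real job the two task UUIDs were polled for roughly 80 seconds before one reached `SUCCESS`.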
2026-01-28 01:15:16.238073 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:16.238077 | orchestrator |
2026-01-28 01:15:16.238080 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-01-28 01:15:16.238084 | orchestrator | Wednesday 28 January 2026 01:08:04 +0000 (0:00:29.851) 0:00:47.293 *****
2026-01-28 01:15:16.238088 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.238092 | orchestrator |
2026-01-28 01:15:16.238095 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-28 01:15:16.238099 | orchestrator | Wednesday 28 January 2026 01:08:17 +0000 (0:00:13.356) 0:01:00.649 *****
2026-01-28 01:15:16.238103 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:16.238107 | orchestrator |
2026-01-28 01:15:16.238114 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-28 01:15:16.238118 | orchestrator | Wednesday 28 January 2026 01:08:30 +0000 (0:00:13.012) 0:01:13.661 *****
2026-01-28 01:15:16.238121 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:16.238125 | orchestrator |
2026-01-28 01:15:16.238129 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-01-28 01:15:16.238133 | orchestrator | Wednesday 28 January 2026 01:08:32 +0000 (0:00:01.192) 0:01:14.853 *****
2026-01-28 01:15:16.238139 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.238143 | orchestrator |
2026-01-28 01:15:16.238147 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-28 01:15:16.238150 | orchestrator | Wednesday 28 January 2026 01:08:32 +0000 (0:00:00.471) 0:01:15.325 *****
2026-01-28 01:15:16.238154 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:15:16.238158 | orchestrator |
2026-01-28 01:15:16.238162 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-01-28 01:15:16.238166 | orchestrator | Wednesday 28 January 2026 01:08:33 +0000 (0:00:00.552) 0:01:15.877 *****
2026-01-28 01:15:16.238170 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:16.238175 | orchestrator |
2026-01-28 01:15:16.238179 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-28 01:15:16.238183 | orchestrator | Wednesday 28 January 2026 01:08:50 +0000 (0:00:17.139) 0:01:33.017 *****
2026-01-28 01:15:16.238187 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.238192 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238196 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238200 | orchestrator |
2026-01-28 01:15:16.238204 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-01-28 01:15:16.238209 | orchestrator |
2026-01-28 01:15:16.238213 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-01-28 01:15:16.238217 | orchestrator | Wednesday 28 January 2026 01:08:50 +0000 (0:00:00.394) 0:01:33.411 *****
2026-01-28 01:15:16.238222 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:15:16.238226 | orchestrator |
2026-01-28 01:15:16.238230 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-01-28 01:15:16.238235 | orchestrator | Wednesday 28 January 2026 01:08:51 +0000 (0:00:00.635) 0:01:34.047 *****
2026-01-28 01:15:16.238239 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238243 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238247 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.238252 | orchestrator |
2026-01-28 01:15:16.238256 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-01-28 01:15:16.238259 | orchestrator | Wednesday 28 January 2026 01:08:53 +0000 (0:00:01.958) 0:01:36.005 *****
2026-01-28 01:15:16.238263 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238267 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238270 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.238274 | orchestrator |
2026-01-28 01:15:16.238278 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-28 01:15:16.238284 | orchestrator | Wednesday 28 January 2026 01:08:55 +0000 (0:00:02.716) 0:01:38.722 *****
2026-01-28 01:15:16.238288 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.238292 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238296 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238299 | orchestrator |
2026-01-28 01:15:16.238303 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-28 01:15:16.238307 | orchestrator | Wednesday 28 January 2026 01:08:56 +0000 (0:00:00.564) 0:01:39.286 *****
2026-01-28 01:15:16.238311 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-28 01:15:16.238314 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238318 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-28 01:15:16.238322 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238330 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-01-28 01:15:16.238334 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-01-28 01:15:16.238338 | orchestrator |
2026-01-28 01:15:16.238342 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-01-28 01:15:16.238348 | orchestrator | Wednesday 28 January 2026 01:09:05 +0000 (0:00:09.393) 0:01:48.680 *****
2026-01-28 01:15:16.238463 | orchestrator | skipping: [testbed-node-0]
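The "Waiting for Nova public port to be UP" task earlier in this log spent 165.74 s retrying a TCP connection until the API endpoint accepted on port 8774. A self-contained sketch of that kind of check (a hypothetical `wait_for_port` helper in the style of Ansible's `wait_for`, not the playbook's actual module, demonstrated against a throwaway local listener rather than a real Nova endpoint):

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 300.0,
                  delay: float = 1.0) -> float:
    """Retry TCP connects until host:port accepts; return elapsed seconds."""
    start = time.monotonic()
    while True:
        try:
            with socket.create_connection((host, port), timeout=5):
                return time.monotonic() - start
        except OSError:
            if time.monotonic() - start > timeout:
                raise TimeoutError(f"{host}:{port} still down after {timeout}s")
            time.sleep(delay)


# Demo: a local listening socket stands in for the Nova API endpoint.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
print(wait_for_port(*server.getsockname(), timeout=5.0) < 5.0)
# prints "True"
server.close()
```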
2026-01-28 01:15:16.238471 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238477 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238483 | orchestrator |
2026-01-28 01:15:16.238489 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-01-28 01:15:16.238495 | orchestrator | Wednesday 28 January 2026 01:09:06 +0000 (0:00:00.762) 0:01:49.442 *****
2026-01-28 01:15:16.238501 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-01-28 01:15:16.238507 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.238513 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-01-28 01:15:16.238519 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238525 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-01-28 01:15:16.238531 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238537 | orchestrator |
2026-01-28 01:15:16.238546 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-01-28 01:15:16.238551 | orchestrator | Wednesday 28 January 2026 01:09:07 +0000 (0:00:00.968) 0:01:50.411 *****
2026-01-28 01:15:16.238555 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238559 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238563 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.238567 | orchestrator |
2026-01-28 01:15:16.238570 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-01-28 01:15:16.238574 | orchestrator | Wednesday 28 January 2026 01:09:08 +0000 (0:00:00.964) 0:01:51.376 *****
2026-01-28 01:15:16.238578 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238582 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238585 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.238589 | orchestrator |
2026-01-28 01:15:16.238593 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-01-28 01:15:16.238596 | orchestrator | Wednesday 28 January 2026 01:09:09 +0000 (0:00:00.885) 0:01:52.261 *****
2026-01-28 01:15:16.238600 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238604 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238607 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.238611 | orchestrator |
2026-01-28 01:15:16.238615 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-01-28 01:15:16.238622 | orchestrator | Wednesday 28 January 2026 01:09:11 +0000 (0:00:02.100) 0:01:54.362 *****
2026-01-28 01:15:16.238626 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238629 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238633 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:16.238637 | orchestrator |
2026-01-28 01:15:16.238641 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-01-28 01:15:16.238645 | orchestrator | Wednesday 28 January 2026 01:09:31 +0000 (0:00:20.371) 0:02:14.734 *****
2026-01-28 01:15:16.238649 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238653 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238657 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:16.238660 | orchestrator |
2026-01-28 01:15:16.238664 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-01-28 01:15:16.238668 | orchestrator | Wednesday 28 January 2026 01:09:44 +0000 (0:00:12.401) 0:02:27.135 *****
2026-01-28 01:15:16.238672 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:16.238676 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238680 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238683 | orchestrator |
2026-01-28 01:15:16.238692 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-01-28 01:15:16.238696 | orchestrator | Wednesday 28 January 2026 01:09:45 +0000 (0:00:00.966) 0:02:28.102 *****
2026-01-28 01:15:16.238700 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238704 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238708 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.238712 | orchestrator |
2026-01-28 01:15:16.238715 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-01-28 01:15:16.238719 | orchestrator | Wednesday 28 January 2026 01:09:58 +0000 (0:00:12.878) 0:02:40.981 *****
2026-01-28 01:15:16.238723 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.238727 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238731 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238735 | orchestrator |
2026-01-28 01:15:16.238739 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-01-28 01:15:16.238743 | orchestrator | Wednesday 28 January 2026 01:09:59 +0000 (0:00:01.033) 0:02:42.014 *****
2026-01-28 01:15:16.238746 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.238750 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.238754 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.238758 | orchestrator |
2026-01-28 01:15:16.238762 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-01-28 01:15:16.238766 | orchestrator |
2026-01-28 01:15:16.238769 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-01-28 01:15:16.238773 | orchestrator | Wednesday 28 January 2026 01:09:59 +0000 (0:00:00.584) 0:02:42.598 *****
2026-01-28 01:15:16.238783 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:15:16.238787 | orchestrator |
2026-01-28 01:15:16.238791 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-01-28 01:15:16.238795 | orchestrator | Wednesday 28 January 2026 01:10:00 +0000 (0:00:00.577) 0:02:43.175 *****
2026-01-28 01:15:16.238799 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-01-28 01:15:16.238803 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-01-28 01:15:16.238807 | orchestrator |
2026-01-28 01:15:16.238811 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-01-28 01:15:16.238814 | orchestrator | Wednesday 28 January 2026 01:10:03 +0000 (0:00:03.165) 0:02:46.341 *****
2026-01-28 01:15:16.238818 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-01-28 01:15:16.238823 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-01-28 01:15:16.238827 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-01-28 01:15:16.238831 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-01-28 01:15:16.238835 | orchestrator |
2026-01-28 01:15:16.238839 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-01-28 01:15:16.238843 | orchestrator | Wednesday 28 January 2026 01:10:09 +0000 (0:00:06.446) 0:02:52.788 *****
2026-01-28 01:15:16.238847 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-28 01:15:16.238851 | orchestrator |
2026-01-28 01:15:16.238855 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-01-28 01:15:16.238859 | orchestrator | Wednesday 28 January 2026 01:10:13 +0000 (0:00:03.553) 0:02:56.342 *****
2026-01-28 01:15:16.238863 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-28 01:15:16.238866 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-01-28 01:15:16.238870 | orchestrator |
2026-01-28 01:15:16.238874 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-01-28 01:15:16.238878 | orchestrator | Wednesday 28 January 2026 01:10:17 +0000 (0:00:03.672) 0:03:00.015 *****
2026-01-28 01:15:16.238885 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-28 01:15:16.238889 | orchestrator |
2026-01-28 01:15:16.238893 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-01-28 01:15:16.238897 | orchestrator | Wednesday 28 January 2026 01:10:19 +0000 (0:00:02.794) 0:03:02.809 *****
2026-01-28 01:15:16.238901 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-01-28 01:15:16.238905 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-01-28 01:15:16.238908 | orchestrator |
2026-01-28 01:15:16.238912 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-01-28 01:15:16.238916 | orchestrator | Wednesday 28 January 2026 01:10:25 +0000 (0:00:05.839) 0:03:08.648 *****
2026-01-28 01:15:16.238926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.238938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.238944 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.238954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.238975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.238980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.238984 | orchestrator | 2026-01-28 01:15:16.238988 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-01-28 01:15:16.238993 | orchestrator | Wednesday 28 January 2026 01:10:27 +0000 (0:00:01.307) 0:03:09.956 ***** 2026-01-28 01:15:16.238997 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.239000 | orchestrator | 2026-01-28 01:15:16.239004 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-01-28 01:15:16.239008 | orchestrator | Wednesday 28 January 2026 01:10:27 +0000 (0:00:00.131) 0:03:10.087 ***** 2026-01-28 01:15:16.239012 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.239016 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.239020 | orchestrator | skipping: [testbed-node-2] 
2026-01-28 01:15:16.239024 | orchestrator | 2026-01-28 01:15:16.239031 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-01-28 01:15:16.239035 | orchestrator | Wednesday 28 January 2026 01:10:27 +0000 (0:00:00.290) 0:03:10.378 ***** 2026-01-28 01:15:16.239039 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-01-28 01:15:16.239043 | orchestrator | 2026-01-28 01:15:16.239047 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-01-28 01:15:16.239050 | orchestrator | Wednesday 28 January 2026 01:10:28 +0000 (0:00:00.893) 0:03:11.271 ***** 2026-01-28 01:15:16.239054 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.239058 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.239062 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.239066 | orchestrator | 2026-01-28 01:15:16.239070 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-01-28 01:15:16.239078 | orchestrator | Wednesday 28 January 2026 01:10:28 +0000 (0:00:00.291) 0:03:11.563 ***** 2026-01-28 01:15:16.239082 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:15:16.239086 | orchestrator | 2026-01-28 01:15:16.239090 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-01-28 01:15:16.239094 | orchestrator | Wednesday 28 January 2026 01:10:29 +0000 (0:00:00.576) 0:03:12.139 ***** 2026-01-28 01:15:16.239098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239139 | orchestrator | 2026-01-28 01:15:16.239143 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-01-28 01:15:16.239147 | orchestrator | Wednesday 28 January 2026 01:10:31 +0000 (0:00:02.415) 0:03:14.555 ***** 2026-01-28 01:15:16.239151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 01:15:16.239161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.239168 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.239172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 01:15:16.239177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.239181 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.239187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 01:15:16.239192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.239199 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.239203 | orchestrator | 2026-01-28 01:15:16.239358 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-01-28 01:15:16.239366 | orchestrator | Wednesday 28 January 2026 01:10:32 +0000 (0:00:00.642) 0:03:15.197 ***** 2026-01-28 01:15:16.239370 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 01:15:16.239375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.239379 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.239386 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 01:15:16.239391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.239399 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.239408 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 01:15:16.239412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.239417 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.239421 | orchestrator | 2026-01-28 
01:15:16.239425 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-01-28 01:15:16.239429 | orchestrator | Wednesday 28 January 2026 01:10:33 +0000 (0:00:00.798) 0:03:15.995 ***** 2026-01-28 01:15:16.239435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239472 | 
orchestrator | 2026-01-28 01:15:16.239479 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-01-28 01:15:16.239483 | orchestrator | Wednesday 28 January 2026 01:10:35 +0000 (0:00:02.437) 0:03:18.433 ***** 2026-01-28 01:15:16.239491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-01-28 01:15:16.239526 | orchestrator | 2026-01-28 01:15:16.239530 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-01-28 01:15:16.239534 | orchestrator | Wednesday 28 January 2026 01:10:41 +0000 (0:00:05.518) 0:03:23.951 ***** 2026-01-28 01:15:16.239538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 01:15:16.239544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.239549 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.239555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 01:15:16.239562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.239566 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.239570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-01-28 01:15:16.239578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.239583 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.239587 | orchestrator | 2026-01-28 01:15:16.239591 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-01-28 01:15:16.239595 | orchestrator | Wednesday 28 January 2026 01:10:41 +0000 (0:00:00.610) 0:03:24.562 ***** 2026-01-28 01:15:16.239602 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:16.239606 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:15:16.239610 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:15:16.239614 | orchestrator | 2026-01-28 01:15:16.239618 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-01-28 01:15:16.239622 | orchestrator | Wednesday 28 January 2026 01:10:43 +0000 (0:00:01.469) 0:03:26.031 ***** 2026-01-28 01:15:16.239626 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.239630 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.239634 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.239638 | orchestrator | 2026-01-28 01:15:16.239642 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-01-28 01:15:16.239646 | orchestrator | Wednesday 28 January 2026 01:10:43 +0000 (0:00:00.320) 0:03:26.351 ***** 2026-01-28 01:15:16.239652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:16.239670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.239745 | orchestrator | 2026-01-28 01:15:16.239750 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-28 01:15:16.239754 | orchestrator | Wednesday 28 January 2026 01:10:45 +0000 (0:00:02.446) 0:03:28.798 ***** 2026-01-28 01:15:16.239758 | orchestrator | 2026-01-28 01:15:16.239762 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-01-28 01:15:16.239766 | orchestrator | Wednesday 28 January 2026 01:10:46 +0000 (0:00:00.179) 0:03:28.977 ***** 2026-01-28 01:15:16.239770 | orchestrator | 
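The container items iterated above all share one shape: a `container_name`, an `image`, a `volumes` list (with empty `''` placeholders for disabled mounts), and a kolla-style `healthcheck` dict (`interval`/`retries`/`start_period`/`timeout` in seconds, plus a `test` list of the form `['CMD-SHELL', '<command>']`). As a minimal sketch of how such a dict maps onto Docker's health-check options — the `healthcheck_flags` helper below is hypothetical and for illustration only; kolla-ansible drives the container engine through its own Ansible modules, not the CLI:

```python
def healthcheck_flags(hc: dict) -> list[str]:
    """Translate a kolla-style healthcheck dict into `docker run`
    health-check flags. Illustrative sketch only (assumption), not
    kolla-ansible's actual implementation."""
    flags = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    # kolla encodes the check command as ['CMD-SHELL', '<command>']
    if hc["test"][0] == "CMD-SHELL":
        flags += ["--health-cmd", hc["test"][1]]
    return flags

# Values taken from the nova-scheduler item in the log above.
flags = healthcheck_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port nova-scheduler 5672"],
    "timeout": "30",
})
```

With the nova-scheduler values from the log, this yields `--health-interval 30s --health-retries 3 --health-start-period 5s --health-timeout 30s --health-cmd 'healthcheck_port nova-scheduler 5672'`.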
2026-01-28 01:15:16.239774 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-01-28 01:15:16.239778 | orchestrator | Wednesday 28 January 2026 01:10:46 +0000 (0:00:00.142) 0:03:29.120 *****
2026-01-28 01:15:16.239782 | orchestrator |
2026-01-28 01:15:16.239786 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-01-28 01:15:16.239790 | orchestrator | Wednesday 28 January 2026 01:10:46 +0000 (0:00:00.142) 0:03:29.262 *****
2026-01-28 01:15:16.239794 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.239798 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:16.239802 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:16.239806 | orchestrator |
2026-01-28 01:15:16.239810 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-01-28 01:15:16.239814 | orchestrator | Wednesday 28 January 2026 01:11:05 +0000 (0:00:19.299) 0:03:48.561 *****
2026-01-28 01:15:16.239818 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:16.239822 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:16.239830 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:16.239834 | orchestrator |
2026-01-28 01:15:16.239838 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-01-28 01:15:16.239842 | orchestrator |
2026-01-28 01:15:16.239846 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-28 01:15:16.239850 | orchestrator | Wednesday 28 January 2026 01:11:15 +0000 (0:00:09.953) 0:03:58.515 *****
2026-01-28 01:15:16.239854 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:15:16.239859 | orchestrator |
2026-01-28 01:15:16.239863 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-28 01:15:16.239867 | orchestrator | Wednesday 28 January 2026 01:11:16 +0000 (0:00:01.261) 0:03:59.776 *****
2026-01-28 01:15:16.239871 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:15:16.239875 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:15:16.239882 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:15:16.239886 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.239890 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.239894 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.239897 | orchestrator |
2026-01-28 01:15:16.239901 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-01-28 01:15:16.239905 | orchestrator | Wednesday 28 January 2026 01:11:17 +0000 (0:00:00.651) 0:04:00.428 *****
2026-01-28 01:15:16.239909 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.239913 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.239917 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.239921 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-01-28 01:15:16.239925 | orchestrator |
2026-01-28 01:15:16.239929 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-01-28 01:15:16.239933 | orchestrator | Wednesday 28 January 2026 01:11:18 +0000 (0:00:01.107) 0:04:01.535 *****
2026-01-28 01:15:16.239937 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-01-28 01:15:16.239941 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-01-28 01:15:16.239945 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-01-28 01:15:16.239949 | orchestrator |
2026-01-28 01:15:16.239953 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-01-28 01:15:16.239957 | orchestrator | Wednesday 28 January 2026 01:11:19 +0000 (0:00:00.670) 0:04:02.206 *****
2026-01-28 01:15:16.239982 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-01-28 01:15:16.239986 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-01-28 01:15:16.239991 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-01-28 01:15:16.239996 | orchestrator |
2026-01-28 01:15:16.240000 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-01-28 01:15:16.240004 | orchestrator | Wednesday 28 January 2026 01:11:20 +0000 (0:00:01.464) 0:04:03.670 *****
2026-01-28 01:15:16.240009 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-01-28 01:15:16.240013 | orchestrator | skipping: [testbed-node-3]
2026-01-28 01:15:16.240018 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-01-28 01:15:16.240022 | orchestrator | skipping: [testbed-node-4]
2026-01-28 01:15:16.240026 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-01-28 01:15:16.240031 | orchestrator | skipping: [testbed-node-5]
2026-01-28 01:15:16.240035 | orchestrator |
2026-01-28 01:15:16.240040 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-01-28 01:15:16.240044 | orchestrator | Wednesday 28 January 2026 01:11:21 +0000 (0:00:00.623) 0:04:04.294 *****
2026-01-28 01:15:16.240166 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-28 01:15:16.240171 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-28 01:15:16.240180 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.240185 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-28 01:15:16.240189 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-28 01:15:16.240194 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.240198 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-28 01:15:16.240203 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-28 01:15:16.240207 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.240212 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-28 01:15:16.240216 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-28 01:15:16.240221 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-28 01:15:16.240234 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-28 01:15:16.240238 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-01-28 01:15:16.240243 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-01-28 01:15:16.240248 | orchestrator |
2026-01-28 01:15:16.240252 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-01-28 01:15:16.240257 | orchestrator | Wednesday 28 January 2026 01:11:23 +0000 (0:00:02.186) 0:04:06.480 *****
2026-01-28 01:15:16.240261 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.240266 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.240270 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.240275 | orchestrator | changed: [testbed-node-3]
2026-01-28 01:15:16.240279 | orchestrator | changed: [testbed-node-4]
2026-01-28 01:15:16.240284 | orchestrator | changed: [testbed-node-5]
2026-01-28 01:15:16.240288 | orchestrator |
2026-01-28 01:15:16.240293 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-01-28 01:15:16.240297 | orchestrator | Wednesday 28 January 2026 01:11:24 +0000
(0:00:01.185) 0:04:07.666 ***** 2026-01-28 01:15:16.240302 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.240326 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.240330 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.240335 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:15:16.240339 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:15:16.240344 | orchestrator | changed: [testbed-node-5] 2026-01-28 01:15:16.240348 | orchestrator | 2026-01-28 01:15:16.240353 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-01-28 01:15:16.240357 | orchestrator | Wednesday 28 January 2026 01:11:26 +0000 (0:00:01.955) 0:04:09.621 ***** 2026-01-28 01:15:16.240364 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240454 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-28 01:15:16.240461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:15:16.240468 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-28 01:15:16.240472 | orchestrator |
2026-01-28 01:15:16.240476 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-01-28 01:15:16.240480 | orchestrator | Wednesday 28 January 2026 01:11:28 +0000 (0:00:02.217) 0:04:11.839 *****
2026-01-28 01:15:16.240484 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:15:16.240489 | orchestrator |
2026-01-28 01:15:16.240493 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-01-28 01:15:16.240497 | orchestrator | Wednesday 28 January 2026 01:11:30 +0000 (0:00:01.314) 0:04:13.154 *****
2026-01-28 01:15:16.240501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-01-28 01:15:16.240507 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/',
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240538 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240553 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240560 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.240568 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}})
2026-01-28 01:15:16.240591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-01-28 01:15:16.240614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:15:16.240618 | orchestrator |
2026-01-28 01:15:16.240623 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-01-28 01:15:16.240627 | orchestrator | Wednesday 28 January 2026 01:11:33 +0000 (0:00:03.210) 0:04:16.365 *****
2026-01-28 01:15:16.240631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.240637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.240642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.240646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.240654 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.240661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.240665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.240669 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.240676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.240680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.240684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.240691 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.240698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-28 01:15:16.240702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:15:16.240735 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:16.240740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-28 01:15:16.240943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:15:16.240950 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:16.240955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-01-28 01:15:16.240998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-01-28 01:15:16.241015 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:16.241021 | orchestrator |
2026-01-28 01:15:16.241026 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-01-28 01:15:16.241030 | orchestrator | Wednesday 28 January 2026 01:11:35 +0000 (0:00:01.582) 0:04:17.947 *****
2026-01-28 01:15:16.241039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.241044 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.241049 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.241053 | orchestrator | skipping: 
[testbed-node-3] 2026-01-28 01:15:16.241061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.241065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.241075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.241079 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.241085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.241089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.241098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.241102 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.241106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-28 01:15:16.241113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.241117 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.241123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-28 01:15:16.241128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.241132 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.241136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-28 01:15:16.241142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.241146 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.241150 | orchestrator | 2026-01-28 01:15:16.241166 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-28 01:15:16.241170 | orchestrator | Wednesday 28 January 2026 01:11:37 +0000 (0:00:02.220) 0:04:20.167 ***** 2026-01-28 01:15:16.241174 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.241178 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.241182 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.241186 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-01-28 01:15:16.241190 | orchestrator | 2026-01-28 01:15:16.241194 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-01-28 01:15:16.241201 | orchestrator | Wednesday 28 January 2026 01:11:38 +0000 (0:00:01.015) 0:04:21.183 
***** 2026-01-28 01:15:16.241205 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-28 01:15:16.241209 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-28 01:15:16.241213 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-28 01:15:16.241217 | orchestrator | 2026-01-28 01:15:16.241221 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-01-28 01:15:16.241225 | orchestrator | Wednesday 28 January 2026 01:11:39 +0000 (0:00:00.935) 0:04:22.118 ***** 2026-01-28 01:15:16.241229 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-28 01:15:16.241233 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-01-28 01:15:16.241237 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-01-28 01:15:16.241241 | orchestrator | 2026-01-28 01:15:16.241245 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-01-28 01:15:16.241248 | orchestrator | Wednesday 28 January 2026 01:11:40 +0000 (0:00:00.892) 0:04:23.011 ***** 2026-01-28 01:15:16.241252 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:15:16.241257 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:15:16.241261 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:15:16.241265 | orchestrator | 2026-01-28 01:15:16.241269 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-01-28 01:15:16.241273 | orchestrator | Wednesday 28 January 2026 01:11:40 +0000 (0:00:00.511) 0:04:23.523 ***** 2026-01-28 01:15:16.241277 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:15:16.241281 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:15:16.241285 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:15:16.241289 | orchestrator | 2026-01-28 01:15:16.241293 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-01-28 01:15:16.241297 | orchestrator | Wednesday 28 January 2026 01:11:41 +0000 (0:00:00.780) 
0:04:24.304 ***** 2026-01-28 01:15:16.241301 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-28 01:15:16.241305 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-28 01:15:16.241309 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-28 01:15:16.241313 | orchestrator | 2026-01-28 01:15:16.241316 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-01-28 01:15:16.241320 | orchestrator | Wednesday 28 January 2026 01:11:42 +0000 (0:00:01.168) 0:04:25.472 ***** 2026-01-28 01:15:16.241324 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-28 01:15:16.241328 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-28 01:15:16.241336 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-28 01:15:16.241340 | orchestrator | 2026-01-28 01:15:16.241344 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-01-28 01:15:16.241348 | orchestrator | Wednesday 28 January 2026 01:11:43 +0000 (0:00:01.105) 0:04:26.578 ***** 2026-01-28 01:15:16.241352 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-01-28 01:15:16.241356 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-01-28 01:15:16.241360 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-01-28 01:15:16.241364 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-01-28 01:15:16.241369 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-01-28 01:15:16.241373 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-01-28 01:15:16.241377 | orchestrator | 2026-01-28 01:15:16.241382 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-01-28 01:15:16.241386 | orchestrator | Wednesday 28 January 2026 01:11:47 +0000 (0:00:03.556) 0:04:30.134 ***** 
2026-01-28 01:15:16.241390 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.241395 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.241399 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.241403 | orchestrator | 2026-01-28 01:15:16.241408 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-01-28 01:15:16.241415 | orchestrator | Wednesday 28 January 2026 01:11:47 +0000 (0:00:00.552) 0:04:30.687 ***** 2026-01-28 01:15:16.241419 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.241424 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.241428 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.241432 | orchestrator | 2026-01-28 01:15:16.241437 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-01-28 01:15:16.241441 | orchestrator | Wednesday 28 January 2026 01:11:48 +0000 (0:00:00.305) 0:04:30.992 ***** 2026-01-28 01:15:16.241445 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:15:16.241450 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:15:16.241454 | orchestrator | changed: [testbed-node-5] 2026-01-28 01:15:16.241458 | orchestrator | 2026-01-28 01:15:16.241463 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-01-28 01:15:16.241467 | orchestrator | Wednesday 28 January 2026 01:11:49 +0000 (0:00:01.134) 0:04:32.127 ***** 2026-01-28 01:15:16.241472 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-28 01:15:16.241479 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-01-28 01:15:16.241484 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova 
secret', 'enabled': True}) 2026-01-28 01:15:16.241488 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-28 01:15:16.241493 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-28 01:15:16.241498 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-01-28 01:15:16.241505 | orchestrator | 2026-01-28 01:15:16.241511 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-01-28 01:15:16.241518 | orchestrator | Wednesday 28 January 2026 01:11:52 +0000 (0:00:03.090) 0:04:35.218 ***** 2026-01-28 01:15:16.241525 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-28 01:15:16.241532 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-28 01:15:16.241539 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-28 01:15:16.241545 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-01-28 01:15:16.241553 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:15:16.241558 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-01-28 01:15:16.241562 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:15:16.241567 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-01-28 01:15:16.241571 | orchestrator | changed: [testbed-node-5] 2026-01-28 01:15:16.241576 | orchestrator | 2026-01-28 01:15:16.241581 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-01-28 01:15:16.241586 | orchestrator | Wednesday 28 January 2026 01:11:55 +0000 (0:00:03.291) 0:04:38.510 ***** 2026-01-28 01:15:16.241591 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.241596 | orchestrator | 2026-01-28 01:15:16.241601 | 
orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-01-28 01:15:16.241605 | orchestrator | Wednesday 28 January 2026 01:11:55 +0000 (0:00:00.118) 0:04:38.628 ***** 2026-01-28 01:15:16.241610 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.241615 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.241620 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.241625 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.241629 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.241634 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.241639 | orchestrator | 2026-01-28 01:15:16.241644 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-01-28 01:15:16.241652 | orchestrator | Wednesday 28 January 2026 01:11:56 +0000 (0:00:00.596) 0:04:39.224 ***** 2026-01-28 01:15:16.241657 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-01-28 01:15:16.241662 | orchestrator | 2026-01-28 01:15:16.241667 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-01-28 01:15:16.241672 | orchestrator | Wednesday 28 January 2026 01:11:57 +0000 (0:00:00.672) 0:04:39.897 ***** 2026-01-28 01:15:16.241677 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.241682 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.241687 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.241694 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.241699 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.241703 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.241707 | orchestrator | 2026-01-28 01:15:16.241712 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-01-28 01:15:16.241716 | orchestrator | Wednesday 28 January 2026 01:11:57 +0000 (0:00:00.764) 0:04:40.662 
***** 2026-01-28 01:15:16.241721 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241735 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241759 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 
01:15:16.241770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241800 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241807 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241812 | orchestrator | 2026-01-28 01:15:16.241816 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-01-28 01:15:16.241821 | orchestrator | Wednesday 28 January 2026 01:12:01 +0000 (0:00:03.209) 0:04:43.871 ***** 2026-01-28 01:15:16.241829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.241836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.241840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.241845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.241853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.241857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.241865 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.241915 | orchestrator | 2026-01-28 01:15:16.241919 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-01-28 01:15:16.241924 | orchestrator | Wednesday 28 January 2026 01:12:06 +0000 (0:00:05.949) 0:04:49.821 ***** 2026-01-28 01:15:16.241928 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.241933 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.241937 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.241941 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.241946 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.241950 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.241954 | orchestrator | 2026-01-28 01:15:16.241973 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-01-28 01:15:16.241978 | orchestrator | Wednesday 28 January 2026 01:12:08 +0000 (0:00:01.586) 0:04:51.407 ***** 2026-01-28 01:15:16.241983 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-28 01:15:16.241987 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-28 01:15:16.241991 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-01-28 01:15:16.241996 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-28 01:15:16.242006 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-28 01:15:16.242011 | orchestrator | changed: [testbed-node-5] => (item={'src': 
'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-01-28 01:15:16.242040 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-28 01:15:16.242046 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.242053 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-28 01:15:16.242060 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.242068 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-01-28 01:15:16.242075 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.242082 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-28 01:15:16.242090 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-28 01:15:16.242097 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-01-28 01:15:16.242103 | orchestrator | 2026-01-28 01:15:16.242110 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-01-28 01:15:16.242117 | orchestrator | Wednesday 28 January 2026 01:12:11 +0000 (0:00:03.288) 0:04:54.695 ***** 2026-01-28 01:15:16.242124 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.242131 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.242138 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.242145 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.242152 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.242159 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.242166 | orchestrator | 2026-01-28 01:15:16.242174 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-01-28 01:15:16.242181 | orchestrator | Wednesday 28 
January 2026 01:12:12 +0000 (0:00:00.640) 0:04:55.335 ***** 2026-01-28 01:15:16.242188 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-28 01:15:16.242195 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-28 01:15:16.242199 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-01-28 01:15:16.242204 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-28 01:15:16.242208 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-28 01:15:16.242213 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-28 01:15:16.242217 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-01-28 01:15:16.242226 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-28 01:15:16.242231 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-01-28 01:15:16.242235 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-28 01:15:16.242239 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.242244 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-28 01:15:16.242248 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.242252 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 
'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-01-28 01:15:16.242256 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.242266 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-28 01:15:16.242270 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-28 01:15:16.242274 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-01-28 01:15:16.242278 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-28 01:15:16.242283 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-28 01:15:16.242287 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-01-28 01:15:16.242291 | orchestrator | 2026-01-28 01:15:16.242295 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-01-28 01:15:16.242300 | orchestrator | Wednesday 28 January 2026 01:12:17 +0000 (0:00:04.954) 0:05:00.289 ***** 2026-01-28 01:15:16.242304 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-28 01:15:16.242308 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-28 01:15:16.242316 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-01-28 01:15:16.242320 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-28 01:15:16.242325 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-28 01:15:16.242329 | 
orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-28 01:15:16.242333 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-01-28 01:15:16.242338 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-28 01:15:16.242342 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-01-28 01:15:16.242346 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-28 01:15:16.242351 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-28 01:15:16.242355 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-01-28 01:15:16.242359 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-28 01:15:16.242364 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-28 01:15:16.242368 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-28 01:15:16.242372 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.242377 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-28 01:15:16.242381 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.242385 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-01-28 01:15:16.242390 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.242394 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-01-28 01:15:16.242398 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-28 01:15:16.242403 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-28 01:15:16.242407 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-01-28 01:15:16.242411 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-28 01:15:16.242416 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-28 01:15:16.242423 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-01-28 01:15:16.242427 | orchestrator | 2026-01-28 01:15:16.242431 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-01-28 01:15:16.242436 | orchestrator | Wednesday 28 January 2026 01:12:24 +0000 (0:00:06.933) 0:05:07.223 ***** 2026-01-28 01:15:16.242442 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.242447 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.242451 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.242456 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.242460 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.242464 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.242468 | orchestrator | 2026-01-28 01:15:16.242473 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-01-28 01:15:16.242477 | orchestrator | Wednesday 28 January 2026 01:12:25 +0000 (0:00:00.913) 0:05:08.137 ***** 2026-01-28 01:15:16.242481 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.242486 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.242490 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.242494 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.242499 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.242503 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.242507 | 
orchestrator | 2026-01-28 01:15:16.242511 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-01-28 01:15:16.242516 | orchestrator | Wednesday 28 January 2026 01:12:25 +0000 (0:00:00.657) 0:05:08.794 ***** 2026-01-28 01:15:16.242520 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.242524 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.242529 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.242533 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:15:16.242537 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:15:16.242542 | orchestrator | changed: [testbed-node-5] 2026-01-28 01:15:16.242546 | orchestrator | 2026-01-28 01:15:16.242565 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-01-28 01:15:16.242569 | orchestrator | Wednesday 28 January 2026 01:12:28 +0000 (0:00:02.169) 0:05:10.964 ***** 2026-01-28 01:15:16.242577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.242582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.242587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.242595 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.242600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.242613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.242618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.242623 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.242638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-28 01:15:16.242643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.242709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-01-28 01:15:16.242722 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.242729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-01-28 01:15:16.242734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.242738 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.242743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-28 01:15:16.242751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.242759 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.242764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-01-28 01:15:16.242768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-01-28 01:15:16.242772 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.242777 | orchestrator | 2026-01-28 01:15:16.242781 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-01-28 01:15:16.242786 | orchestrator | Wednesday 28 January 2026 01:12:29 +0000 (0:00:01.302) 0:05:12.266 ***** 2026-01-28 01:15:16.242790 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-28 01:15:16.242795 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-28 01:15:16.242799 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.242803 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-28 01:15:16.242810 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-28 01:15:16.242814 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.242819 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-28 01:15:16.242823 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-01-28 01:15:16.242827 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.242831 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-28 01:15:16.242836 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-28 01:15:16.242840 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.242844 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-28 01:15:16.242849 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-28 
01:15:16.242853 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.242857 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-28 01:15:16.242862 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-01-28 01:15:16.242866 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.242870 | orchestrator | 2026-01-28 01:15:16.242875 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-01-28 01:15:16.242879 | orchestrator | Wednesday 28 January 2026 01:12:30 +0000 (0:00:00.887) 0:05:13.154 ***** 2026-01-28 01:15:16.242883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.242894 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.242899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-01-28 01:15:16.242906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.242911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 01:15:16.242915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.242923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-01-28 01:15:16 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:15:16.242935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-01-28 01:15:16.242940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-01-28 01:15:16.242945 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/',
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.242951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.242956 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.243004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.243010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.243015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:16.243019 | orchestrator | 2026-01-28 01:15:16.243023 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-01-28 01:15:16.243028 | orchestrator | Wednesday 28 January 2026 01:12:33 +0000 
(0:00:02.822) 0:05:15.977 ***** 2026-01-28 01:15:16.243032 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.243037 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.243041 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.243045 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.243049 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.243054 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.243058 | orchestrator | 2026-01-28 01:15:16.243062 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-28 01:15:16.243067 | orchestrator | Wednesday 28 January 2026 01:12:33 +0000 (0:00:00.818) 0:05:16.795 ***** 2026-01-28 01:15:16.243071 | orchestrator | 2026-01-28 01:15:16.243075 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-28 01:15:16.243080 | orchestrator | Wednesday 28 January 2026 01:12:34 +0000 (0:00:00.133) 0:05:16.929 ***** 2026-01-28 01:15:16.243084 | orchestrator | 2026-01-28 01:15:16.243088 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-28 01:15:16.243095 | orchestrator | Wednesday 28 January 2026 01:12:34 +0000 (0:00:00.130) 0:05:17.060 ***** 2026-01-28 01:15:16.243100 | orchestrator | 2026-01-28 01:15:16.243104 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-28 01:15:16.243109 | orchestrator | Wednesday 28 January 2026 01:12:34 +0000 (0:00:00.135) 0:05:17.195 ***** 2026-01-28 01:15:16.243113 | orchestrator | 2026-01-28 01:15:16.243117 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-01-28 01:15:16.243121 | orchestrator | Wednesday 28 January 2026 01:12:34 +0000 (0:00:00.128) 0:05:17.324 ***** 2026-01-28 01:15:16.243126 | orchestrator | 2026-01-28 01:15:16.243133 | orchestrator | TASK [nova-cell : Flush 
handlers] ********************************************** 2026-01-28 01:15:16.243137 | orchestrator | Wednesday 28 January 2026 01:12:34 +0000 (0:00:00.127) 0:05:17.451 ***** 2026-01-28 01:15:16.243141 | orchestrator | 2026-01-28 01:15:16.243146 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-01-28 01:15:16.243150 | orchestrator | Wednesday 28 January 2026 01:12:34 +0000 (0:00:00.293) 0:05:17.745 ***** 2026-01-28 01:15:16.243154 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:16.243159 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:15:16.243163 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:15:16.243167 | orchestrator | 2026-01-28 01:15:16.243172 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-01-28 01:15:16.243176 | orchestrator | Wednesday 28 January 2026 01:12:41 +0000 (0:00:06.859) 0:05:24.604 ***** 2026-01-28 01:15:16.243180 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:16.243184 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:15:16.243189 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:15:16.243193 | orchestrator | 2026-01-28 01:15:16.243197 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-01-28 01:15:16.243202 | orchestrator | Wednesday 28 January 2026 01:12:52 +0000 (0:00:11.224) 0:05:35.829 ***** 2026-01-28 01:15:16.243206 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:15:16.243213 | orchestrator | changed: [testbed-node-5] 2026-01-28 01:15:16.243220 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:15:16.243226 | orchestrator | 2026-01-28 01:15:16.243233 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-01-28 01:15:16.243240 | orchestrator | Wednesday 28 January 2026 01:13:08 +0000 (0:00:15.343) 0:05:51.172 ***** 2026-01-28 01:15:16.243247 | orchestrator | 
changed: [testbed-node-5] 2026-01-28 01:15:16.243253 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:15:16.243260 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:15:16.243267 | orchestrator | 2026-01-28 01:15:16.243274 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-01-28 01:15:16.243280 | orchestrator | Wednesday 28 January 2026 01:13:38 +0000 (0:00:29.955) 0:06:21.128 ***** 2026-01-28 01:15:16.243285 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:15:16.243293 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:15:16.243297 | orchestrator | changed: [testbed-node-5] 2026-01-28 01:15:16.243302 | orchestrator | 2026-01-28 01:15:16.243306 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-01-28 01:15:16.243310 | orchestrator | Wednesday 28 January 2026 01:13:38 +0000 (0:00:00.688) 0:06:21.816 ***** 2026-01-28 01:15:16.243315 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:15:16.243319 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:15:16.243323 | orchestrator | changed: [testbed-node-5] 2026-01-28 01:15:16.243328 | orchestrator | 2026-01-28 01:15:16.243332 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-01-28 01:15:16.243336 | orchestrator | Wednesday 28 January 2026 01:13:39 +0000 (0:00:00.702) 0:06:22.519 ***** 2026-01-28 01:15:16.243341 | orchestrator | changed: [testbed-node-5] 2026-01-28 01:15:16.243345 | orchestrator | changed: [testbed-node-3] 2026-01-28 01:15:16.243349 | orchestrator | changed: [testbed-node-4] 2026-01-28 01:15:16.243354 | orchestrator | 2026-01-28 01:15:16.243358 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-01-28 01:15:16.243362 | orchestrator | Wednesday 28 January 2026 01:14:01 +0000 (0:00:21.912) 0:06:44.431 ***** 2026-01-28 01:15:16.243367 | orchestrator | 
skipping: [testbed-node-3] 2026-01-28 01:15:16.243371 | orchestrator | 2026-01-28 01:15:16.243375 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-01-28 01:15:16.243380 | orchestrator | Wednesday 28 January 2026 01:14:01 +0000 (0:00:00.132) 0:06:44.563 ***** 2026-01-28 01:15:16.243384 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.243388 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.243392 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.243400 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.243404 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.243409 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-01-28 01:15:16.243413 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-28 01:15:16.243418 | orchestrator | 2026-01-28 01:15:16.243422 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-01-28 01:15:16.243426 | orchestrator | Wednesday 28 January 2026 01:14:23 +0000 (0:00:22.025) 0:07:06.589 ***** 2026-01-28 01:15:16.243431 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.243435 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.243439 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.243443 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.243448 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.243452 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.243456 | orchestrator | 2026-01-28 01:15:16.243460 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-01-28 01:15:16.243465 | orchestrator | Wednesday 28 January 2026 01:14:32 +0000 (0:00:08.894) 0:07:15.484 ***** 2026-01-28 01:15:16.243469 | orchestrator | skipping: 
[testbed-node-1] 2026-01-28 01:15:16.243473 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.243478 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.243482 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.243486 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.243491 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-01-28 01:15:16.243495 | orchestrator | 2026-01-28 01:15:16.243502 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-01-28 01:15:16.243506 | orchestrator | Wednesday 28 January 2026 01:14:36 +0000 (0:00:03.781) 0:07:19.265 ***** 2026-01-28 01:15:16.243510 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-28 01:15:16.243515 | orchestrator | 2026-01-28 01:15:16.243519 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-01-28 01:15:16.243523 | orchestrator | Wednesday 28 January 2026 01:14:50 +0000 (0:00:13.946) 0:07:33.211 ***** 2026-01-28 01:15:16.243528 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-28 01:15:16.243532 | orchestrator | 2026-01-28 01:15:16.243536 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-01-28 01:15:16.243540 | orchestrator | Wednesday 28 January 2026 01:14:51 +0000 (0:00:01.283) 0:07:34.495 ***** 2026-01-28 01:15:16.243545 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.243549 | orchestrator | 2026-01-28 01:15:16.243553 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-01-28 01:15:16.243558 | orchestrator | Wednesday 28 January 2026 01:14:52 +0000 (0:00:01.304) 0:07:35.800 ***** 2026-01-28 01:15:16.243562 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-01-28 01:15:16.243566 | orchestrator | 2026-01-28 
01:15:16.243571 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-01-28 01:15:16.243575 | orchestrator | Wednesday 28 January 2026 01:15:06 +0000 (0:00:13.554) 0:07:49.354 ***** 2026-01-28 01:15:16.243579 | orchestrator | ok: [testbed-node-3] 2026-01-28 01:15:16.243584 | orchestrator | ok: [testbed-node-4] 2026-01-28 01:15:16.243588 | orchestrator | ok: [testbed-node-5] 2026-01-28 01:15:16.243592 | orchestrator | ok: [testbed-node-0] 2026-01-28 01:15:16.243597 | orchestrator | ok: [testbed-node-1] 2026-01-28 01:15:16.243601 | orchestrator | ok: [testbed-node-2] 2026-01-28 01:15:16.243605 | orchestrator | 2026-01-28 01:15:16.243609 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-01-28 01:15:16.243614 | orchestrator | 2026-01-28 01:15:16.243618 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-01-28 01:15:16.243622 | orchestrator | Wednesday 28 January 2026 01:15:08 +0000 (0:00:01.732) 0:07:51.087 ***** 2026-01-28 01:15:16.243630 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:16.243634 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:15:16.243638 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:15:16.243643 | orchestrator | 2026-01-28 01:15:16.243647 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-01-28 01:15:16.243651 | orchestrator | 2026-01-28 01:15:16.243656 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-01-28 01:15:16.243660 | orchestrator | Wednesday 28 January 2026 01:15:09 +0000 (0:00:01.018) 0:07:52.105 ***** 2026-01-28 01:15:16.243667 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.243672 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.243676 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.243680 | orchestrator | 
2026-01-28 01:15:16.243685 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-01-28 01:15:16.243689 | orchestrator | 2026-01-28 01:15:16.243694 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-01-28 01:15:16.243698 | orchestrator | Wednesday 28 January 2026 01:15:09 +0000 (0:00:00.508) 0:07:52.613 ***** 2026-01-28 01:15:16.243702 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-01-28 01:15:16.243707 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-01-28 01:15:16.243711 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-01-28 01:15:16.243715 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-01-28 01:15:16.243720 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-01-28 01:15:16.243724 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-01-28 01:15:16.243728 | orchestrator | skipping: [testbed-node-3] 2026-01-28 01:15:16.243733 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-01-28 01:15:16.243737 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-01-28 01:15:16.243741 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-01-28 01:15:16.243745 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-01-28 01:15:16.243750 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-01-28 01:15:16.243754 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-01-28 01:15:16.243758 | orchestrator | skipping: [testbed-node-4] 2026-01-28 01:15:16.243763 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-01-28 01:15:16.243767 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-01-28 01:15:16.243771 | orchestrator | skipping: 
[testbed-node-5] => (item=nova-compute-ironic)  2026-01-28 01:15:16.243776 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-01-28 01:15:16.243780 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-01-28 01:15:16.243784 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-01-28 01:15:16.243789 | orchestrator | skipping: [testbed-node-5] 2026-01-28 01:15:16.243793 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-01-28 01:15:16.243797 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-01-28 01:15:16.243802 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-01-28 01:15:16.243806 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-01-28 01:15:16.243810 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-01-28 01:15:16.243815 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-01-28 01:15:16.243819 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.243823 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-01-28 01:15:16.243828 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-01-28 01:15:16.243832 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-01-28 01:15:16.243840 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-01-28 01:15:16.243847 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-01-28 01:15:16.243852 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-01-28 01:15:16.243856 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-01-28 01:15:16.243860 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.243865 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-01-28 01:15:16.243869 | orchestrator | skipping: 
[testbed-node-2] => (item=nova-compute-ironic)  2026-01-28 01:15:16.243873 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-01-28 01:15:16.243878 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-01-28 01:15:16.243882 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-01-28 01:15:16.243886 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.243891 | orchestrator | 2026-01-28 01:15:16.243895 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-01-28 01:15:16.243899 | orchestrator | 2026-01-28 01:15:16.243904 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-01-28 01:15:16.243908 | orchestrator | Wednesday 28 January 2026 01:15:11 +0000 (0:00:01.384) 0:07:53.998 ***** 2026-01-28 01:15:16.243913 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-01-28 01:15:16.243917 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-01-28 01:15:16.243921 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.243926 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-01-28 01:15:16.243930 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-01-28 01:15:16.243934 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.243939 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-01-28 01:15:16.243943 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-01-28 01:15:16.243947 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.243952 | orchestrator | 2026-01-28 01:15:16.243956 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-01-28 01:15:16.243974 | orchestrator | 2026-01-28 01:15:16.243979 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-01-28 
01:15:16.243983 | orchestrator | Wednesday 28 January 2026 01:15:11 +0000 (0:00:00.836) 0:07:54.834 ***** 2026-01-28 01:15:16.243987 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.243992 | orchestrator | 2026-01-28 01:15:16.243996 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-01-28 01:15:16.244000 | orchestrator | 2026-01-28 01:15:16.244008 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-01-28 01:15:16.244012 | orchestrator | Wednesday 28 January 2026 01:15:12 +0000 (0:00:00.690) 0:07:55.525 ***** 2026-01-28 01:15:16.244016 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:16.244021 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:16.244025 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:16.244029 | orchestrator | 2026-01-28 01:15:16.244034 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:15:16.244038 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 01:15:16.244043 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-01-28 01:15:16.244048 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-28 01:15:16.244052 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-01-28 01:15:16.244057 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-01-28 01:15:16.244064 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-28 01:15:16.244068 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-01-28 01:15:16.244073 | orchestrator | 
2026-01-28 01:15:16.244077 | orchestrator | 2026-01-28 01:15:16.244082 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:15:16.244086 | orchestrator | Wednesday 28 January 2026 01:15:13 +0000 (0:00:00.430) 0:07:55.955 ***** 2026-01-28 01:15:16.244090 | orchestrator | =============================================================================== 2026-01-28 01:15:16.244095 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 29.96s 2026-01-28 01:15:16.244099 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.85s 2026-01-28 01:15:16.244103 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.03s 2026-01-28 01:15:16.244108 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 21.91s 2026-01-28 01:15:16.244112 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.37s 2026-01-28 01:15:16.244116 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.30s 2026-01-28 01:15:16.244121 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.14s 2026-01-28 01:15:16.244131 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 15.34s 2026-01-28 01:15:16.244140 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.95s 2026-01-28 01:15:16.244147 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.55s 2026-01-28 01:15:16.244153 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.36s 2026-01-28 01:15:16.244160 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.01s 2026-01-28 01:15:16.244167 | orchestrator | nova-cell : Create cell 
------------------------------------------------ 12.88s
2026-01-28 01:15:16.244174 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.40s
2026-01-28 01:15:16.244180 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.22s
2026-01-28 01:15:16.244186 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.95s
2026-01-28 01:15:16.244193 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.39s
2026-01-28 01:15:16.244200 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.89s
2026-01-28 01:15:16.244206 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.93s
2026-01-28 01:15:16.244213 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 6.86s
2026-01-28 01:15:19.279466 | orchestrator | 2026-01-28 01:15:19 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED
2026-01-28 01:15:19.279570 | orchestrator | 2026-01-28 01:15:19 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:15:22.320194 | orchestrator | 2026-01-28 01:15:22 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED
2026-01-28 01:15:22.320269 | orchestrator | 2026-01-28 01:15:22 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:15:25.362453 | orchestrator | 2026-01-28 01:15:25 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED
2026-01-28 01:15:25.362526 | orchestrator | 2026-01-28 01:15:25 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:15:28.407427 | orchestrator | 2026-01-28 01:15:28 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED
2026-01-28 01:15:28.407531 | orchestrator | 2026-01-28 01:15:28 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:15:31.450193 | orchestrator | 2026-01-28 01:15:31 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED
2026-01-28 01:15:31.450325 | orchestrator | 2026-01-28 01:15:31 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:15:34.495554 | orchestrator | 2026-01-28 01:15:34 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state STARTED
2026-01-28 01:15:34.495676 | orchestrator | 2026-01-28 01:15:34 | INFO  | Wait 1 second(s) until the next check
2026-01-28 01:15:37.538316 | orchestrator | 2026-01-28 01:15:37 | INFO  | Task 05da3e69-0a4f-41cd-9f6d-b95bff8a8216 is in state SUCCESS
2026-01-28 01:15:37.540816 | orchestrator |
2026-01-28 01:15:37.540916 | orchestrator |
2026-01-28 01:15:37.540935 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-01-28 01:15:37.540952 | orchestrator |
2026-01-28 01:15:37.540967 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-01-28 01:15:37.540981 | orchestrator | Wednesday 28 January 2026 01:11:17 +0000 (0:00:00.261) 0:00:00.261 *****
2026-01-28 01:15:37.541013 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:37.541025 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:15:37.541034 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:15:37.541043 | orchestrator |
2026-01-28 01:15:37.541056 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-01-28 01:15:37.541071 | orchestrator | Wednesday 28 January 2026 01:11:17 +0000 (0:00:00.378) 0:00:00.640 *****
2026-01-28 01:15:37.541087 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-01-28 01:15:37.541098 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-01-28 01:15:37.541107 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-01-28 01:15:37.541116 | orchestrator |
2026-01-28 01:15:37.541124 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-01-28 01:15:37.541133 | orchestrator |
2026-01-28 01:15:37.541143 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-28 01:15:37.541486 | orchestrator | Wednesday 28 January 2026 01:11:18 +0000 (0:00:00.459) 0:00:01.099 *****
2026-01-28 01:15:37.541509 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:15:37.541526 | orchestrator |
2026-01-28 01:15:37.541537 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-01-28 01:15:37.541546 | orchestrator | Wednesday 28 January 2026 01:11:18 +0000 (0:00:00.567) 0:00:01.667 *****
2026-01-28 01:15:37.541556 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-01-28 01:15:37.541564 | orchestrator |
2026-01-28 01:15:37.541573 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-01-28 01:15:37.541582 | orchestrator | Wednesday 28 January 2026 01:11:22 +0000 (0:00:03.365) 0:00:05.032 *****
2026-01-28 01:15:37.541591 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-01-28 01:15:37.541605 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-01-28 01:15:37.541619 | orchestrator |
2026-01-28 01:15:37.541652 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-01-28 01:15:37.541665 | orchestrator | Wednesday 28 January 2026 01:11:28 +0000 (0:00:06.761) 0:00:11.794 *****
2026-01-28 01:15:37.541674 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-01-28 01:15:37.541683 | orchestrator |
2026-01-28 01:15:37.541692 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-01-28 01:15:37.541700 | orchestrator | Wednesday 28 January 2026 01:11:31 +0000 (0:00:02.981) 0:00:14.776 *****
2026-01-28 01:15:37.541709 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-01-28 01:15:37.541718 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-28 01:15:37.541727 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-01-28 01:15:37.541759 | orchestrator |
2026-01-28 01:15:37.541768 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-01-28 01:15:37.541777 | orchestrator | Wednesday 28 January 2026 01:11:38 +0000 (0:00:06.667) 0:00:21.443 *****
2026-01-28 01:15:37.541785 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-01-28 01:15:37.541794 | orchestrator |
2026-01-28 01:15:37.541803 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-01-28 01:15:37.541811 | orchestrator | Wednesday 28 January 2026 01:11:41 +0000 (0:00:03.105) 0:00:24.549 *****
2026-01-28 01:15:37.541820 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-28 01:15:37.541828 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-01-28 01:15:37.541837 | orchestrator |
2026-01-28 01:15:37.541846 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-01-28 01:15:37.541855 | orchestrator | Wednesday 28 January 2026 01:11:47 +0000 (0:00:06.187) 0:00:30.736 *****
2026-01-28 01:15:37.541864 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-01-28 01:15:37.541872 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-01-28 01:15:37.541881 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-01-28 01:15:37.541889 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-01-28 01:15:37.541898 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-01-28 01:15:37.541906 | orchestrator |
2026-01-28 01:15:37.541915 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-28 01:15:37.541923 | orchestrator | Wednesday 28 January 2026 01:12:01 +0000 (0:00:13.619) 0:00:44.356 *****
2026-01-28 01:15:37.541932 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:15:37.541941 | orchestrator |
2026-01-28 01:15:37.541949 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-01-28 01:15:37.541958 | orchestrator | Wednesday 28 January 2026 01:12:02 +0000 (0:00:00.819) 0:00:45.176 *****
2026-01-28 01:15:37.541966 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.541975 | orchestrator |
2026-01-28 01:15:37.541983 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-01-28 01:15:37.542070 | orchestrator | Wednesday 28 January 2026 01:12:07 +0000 (0:00:05.424) 0:00:50.600 *****
2026-01-28 01:15:37.542088 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.542098 | orchestrator |
2026-01-28 01:15:37.542107 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-28 01:15:37.542152 | orchestrator | Wednesday 28 January 2026 01:12:11 +0000 (0:00:03.794) 0:00:54.395 *****
2026-01-28 01:15:37.542162 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:37.542171 | orchestrator |
2026-01-28 01:15:37.542189 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-01-28 01:15:37.542198 | orchestrator | Wednesday 28 January 2026 01:12:14 +0000 (0:00:02.745) 0:00:57.141 *****
2026-01-28 01:15:37.542207 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-28 01:15:37.542215 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-28 01:15:37.542224 | orchestrator |
2026-01-28 01:15:37.542252 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-01-28 01:15:37.542261 | orchestrator | Wednesday 28 January 2026 01:12:23 +0000 (0:00:08.955) 0:01:06.097 *****
2026-01-28 01:15:37.542270 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-01-28 01:15:37.542279 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-01-28 01:15:37.542289 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-01-28 01:15:37.542308 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-01-28 01:15:37.542316 | orchestrator |
2026-01-28 01:15:37.542325 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-01-28 01:15:37.542334 | orchestrator | Wednesday 28 January 2026 01:12:37 +0000 (0:00:14.677) 0:01:20.774 *****
2026-01-28 01:15:37.542342 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.542351 | orchestrator |
2026-01-28 01:15:37.542359 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-01-28 01:15:37.542368 | orchestrator | Wednesday 28 January 2026 01:12:42 +0000 (0:00:04.584) 0:01:25.359 *****
2026-01-28 01:15:37.542376 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.542385 | orchestrator |
2026-01-28 01:15:37.542393 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-01-28 01:15:37.542402 | orchestrator | Wednesday 28 January 2026 01:12:47 +0000 (0:00:04.840) 0:01:30.200 *****
2026-01-28 01:15:37.542410 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:37.542419 | orchestrator |
2026-01-28 01:15:37.542433 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-01-28 01:15:37.542442 | orchestrator | Wednesday 28 January 2026 01:12:47 +0000 (0:00:00.208) 0:01:30.408 *****
2026-01-28 01:15:37.542450 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:37.542459 | orchestrator |
2026-01-28 01:15:37.542467 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-28 01:15:37.542476 | orchestrator | Wednesday 28 January 2026 01:12:51 +0000 (0:00:03.654) 0:01:34.063 *****
2026-01-28 01:15:37.542484 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:15:37.542493 | orchestrator |
2026-01-28 01:15:37.542501 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-01-28 01:15:37.542510 | orchestrator | Wednesday 28 January 2026 01:12:52 +0000 (0:00:01.042) 0:01:35.106 *****
2026-01-28 01:15:37.542518 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:37.542527 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:37.542535 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.542544 | orchestrator |
2026-01-28 01:15:37.542552 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-01-28 01:15:37.542561 | orchestrator | Wednesday 28 January 2026 01:12:57 +0000 (0:00:04.904) 0:01:40.011 *****
2026-01-28 01:15:37.542569 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:37.542578 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:37.542586 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.542594 | orchestrator |
2026-01-28 01:15:37.542603 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-01-28 01:15:37.542612 | orchestrator | Wednesday 28 January 2026 01:13:01 +0000 (0:00:04.587) 0:01:44.599 *****
2026-01-28 01:15:37.542620 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.542628 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:37.542637 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:37.542645 | orchestrator |
2026-01-28 01:15:37.542654 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-01-28 01:15:37.542662 | orchestrator | Wednesday 28 January 2026 01:13:02 +0000 (0:00:00.806) 0:01:45.405 *****
2026-01-28 01:15:37.542671 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:15:37.542679 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:37.542688 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:15:37.542696 | orchestrator |
2026-01-28 01:15:37.542705 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-01-28 01:15:37.542713 | orchestrator | Wednesday 28 January 2026 01:13:04 +0000 (0:00:01.984) 0:01:47.390 *****
2026-01-28 01:15:37.542722 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:37.542730 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:37.542739 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.542753 | orchestrator |
2026-01-28 01:15:37.542762 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-01-28 01:15:37.542770 | orchestrator | Wednesday 28 January 2026 01:13:05 +0000 (0:00:01.086) 0:01:48.599 *****
2026-01-28 01:15:37.542779 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.542787 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:37.542796 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:37.542804 | orchestrator |
2026-01-28 01:15:37.542813 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-01-28 01:15:37.542821 | orchestrator | Wednesday 28 January 2026 01:13:06 +0000 (0:00:01.789) 0:01:49.686 *****
2026-01-28 01:15:37.542830 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:37.542838 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.542847 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:37.542855 | orchestrator |
2026-01-28 01:15:37.542880 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-01-28 01:15:37.542889 | orchestrator | Wednesday 28 January 2026 01:13:08 +0000 (0:00:01.491) 0:01:51.475 *****
2026-01-28 01:15:37.542897 | orchestrator | changed: [testbed-node-0]
2026-01-28 01:15:37.542906 | orchestrator | changed: [testbed-node-1]
2026-01-28 01:15:37.542914 | orchestrator | changed: [testbed-node-2]
2026-01-28 01:15:37.542923 | orchestrator |
2026-01-28 01:15:37.542932 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-01-28 01:15:37.542940 | orchestrator | Wednesday 28 January 2026 01:13:10 +0000 (0:00:01.491) 0:01:52.967 *****
2026-01-28 01:15:37.542949 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:37.542958 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:15:37.542966 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:15:37.542975 | orchestrator |
2026-01-28 01:15:37.542984 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-01-28 01:15:37.543036 | orchestrator | Wednesday 28 January 2026 01:13:10 +0000 (0:00:00.664) 0:01:53.631 *****
2026-01-28 01:15:37.543046 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:15:37.543054 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:37.543063 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:15:37.543071 | orchestrator |
2026-01-28 01:15:37.543080 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-01-28 01:15:37.543110 | orchestrator | Wednesday 28 January 2026 01:13:13 +0000 (0:00:02.410) 0:01:56.042 *****
2026-01-28 01:15:37.543119 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-01-28 01:15:37.543128 | orchestrator |
2026-01-28 01:15:37.543137 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-01-28 01:15:37.543145 | orchestrator | Wednesday 28 January 2026 01:13:13 +0000 (0:00:00.622) 0:01:56.665 *****
2026-01-28 01:15:37.543154 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:37.543162 | orchestrator |
2026-01-28 01:15:37.543171 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-01-28 01:15:37.543179 | orchestrator | Wednesday 28 January 2026 01:13:16 +0000 (0:00:03.067) 0:01:59.732 *****
2026-01-28 01:15:37.543189 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:37.543202 | orchestrator |
2026-01-28 01:15:37.543216 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-01-28 01:15:37.543230 | orchestrator | Wednesday 28 January 2026 01:13:19 +0000 (0:00:02.856) 0:02:02.589 *****
2026-01-28 01:15:37.543244 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-01-28 01:15:37.543265 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-01-28 01:15:37.543280 | orchestrator |
2026-01-28 01:15:37.543295 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-01-28 01:15:37.543309 | orchestrator | Wednesday 28 January 2026 01:13:25 +0000 (0:00:05.879) 0:02:08.469 *****
2026-01-28 01:15:37.543325 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:37.543339 | orchestrator |
2026-01-28 01:15:37.543353 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-01-28 01:15:37.543377 | orchestrator | Wednesday 28 January 2026 01:13:28 +0000 (0:00:03.254) 0:02:11.724 *****
2026-01-28 01:15:37.543391 | orchestrator | ok: [testbed-node-0]
2026-01-28 01:15:37.543399 | orchestrator | ok: [testbed-node-1]
2026-01-28 01:15:37.543408 | orchestrator | ok: [testbed-node-2]
2026-01-28 01:15:37.543416 | orchestrator |
2026-01-28 01:15:37.543425 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-01-28 01:15:37.543434 | orchestrator | Wednesday 28 January 2026 01:13:29 +0000 (0:00:00.320) 0:02:12.044 *****
2026-01-28 01:15:37.543445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-28 01:15:37.543466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-28 01:15:37.543479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-28 01:15:37.543495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-28 01:15:37.543519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-28 01:15:37.543543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-28 01:15:37.543560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-28 01:15:37.543577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-28 01:15:37.543602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-28 01:15:37.543618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-28 01:15:37.543640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-28 01:15:37.543657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-28 01:15:37.543666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:15:37.543675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:15:37.543696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:15:37.543712 | orchestrator |
2026-01-28 01:15:37.543727 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-01-28 01:15:37.543743 | orchestrator | Wednesday 28 January 2026 01:13:31 +0000 (0:00:02.237) 0:02:14.282 *****
2026-01-28 01:15:37.543758 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:37.543768 | orchestrator |
2026-01-28 01:15:37.543777 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-01-28 01:15:37.543785 | orchestrator | Wednesday 28 January 2026 01:13:31 +0000 (0:00:00.162) 0:02:14.445 *****
2026-01-28 01:15:37.543794 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:37.543803 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:37.543811 | orchestrator | skipping: [testbed-node-2]
2026-01-28 01:15:37.543820 | orchestrator |
2026-01-28 01:15:37.543828 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-01-28 01:15:37.543837 | orchestrator | Wednesday 28 January 2026 01:13:32 +0000 (0:00:00.489) 0:02:14.934 *****
2026-01-28 01:15:37.543847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-28 01:15:37.543867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-28 01:15:37.543877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-28 01:15:37.543886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-28 01:15:37.543895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:15:37.543904 | orchestrator | skipping: [testbed-node-0]
2026-01-28 01:15:37.543920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-01-28 01:15:37.543935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-01-28 01:15:37.543953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-01-28 01:15:37.543963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-01-28 01:15:37.543972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-01-28 01:15:37.543981 | orchestrator | skipping: [testbed-node-1]
2026-01-28 01:15:37.544187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-28 01:15:37.544213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 01:15:37.544233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:15:37.544287 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:37.544296 | orchestrator | 2026-01-28 01:15:37.544305 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-28 01:15:37.544314 | orchestrator | Wednesday 28 January 2026 01:13:32 +0000 (0:00:00.708) 0:02:15.643 ***** 2026-01-28 01:15:37.544322 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-01-28 01:15:37.544330 | orchestrator | 2026-01-28 01:15:37.544338 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-01-28 01:15:37.544346 | orchestrator | Wednesday 28 January 2026 01:13:33 +0000 (0:00:00.558) 0:02:16.201 ***** 2026-01-28 01:15:37.544355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.544370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.544389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.544397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.544406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.544414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 
'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.544422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.544435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.544448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.544460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.544469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.544477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.544485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.544504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.544528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.544537 | orchestrator | 2026-01-28 01:15:37.544545 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-01-28 01:15:37.544553 | orchestrator | Wednesday 28 January 2026 01:13:37 +0000 (0:00:04.485) 0:02:20.687 ***** 2026-01-28 01:15:37.544565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-28 01:15:37.544573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 01:15:37.544582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:15:37.544617 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:37.544626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-28 01:15:37.544634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 01:15:37.544646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:15:37.544676 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:37.544690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-28 01:15:37.544698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 01:15:37.544707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:15:37.544735 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:37.544743 | orchestrator | 2026-01-28 01:15:37.544752 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-01-28 01:15:37.544760 | orchestrator | Wednesday 28 January 2026 01:13:38 +0000 (0:00:00.838) 0:02:21.525 ***** 2026-01-28 01:15:37.544768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-28 01:15:37.544788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 01:15:37.544797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544805 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:15:37.544825 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:37.544833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-28 01:15:37.544851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 01:15:37.544865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:15:37.544890 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:37.544902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-01-28 01:15:37.544910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-01-28 01:15:37.544923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-01-28 01:15:37.544947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-01-28 01:15:37.544955 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:37.544963 | orchestrator | 2026-01-28 01:15:37.544971 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-01-28 01:15:37.544979 | orchestrator | Wednesday 28 January 2026 01:13:39 +0000 (0:00:00.789) 0:02:22.315 ***** 2026-01-28 01:15:37.545037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.545055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.545076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.545090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.545099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.545107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.545119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545220 | orchestrator | 2026-01-28 01:15:37.545229 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-01-28 01:15:37.545238 | orchestrator | Wednesday 28 January 2026 01:13:43 +0000 (0:00:04.478) 0:02:26.793 ***** 2026-01-28 01:15:37.545247 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-28 01:15:37.545256 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-28 01:15:37.545265 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-01-28 01:15:37.545274 | orchestrator | 2026-01-28 01:15:37.545282 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-01-28 01:15:37.545291 | orchestrator | Wednesday 28 January 2026 01:13:45 +0000 (0:00:01.797) 0:02:28.590 ***** 2026-01-28 01:15:37.545377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.545389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.545402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.545417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.545427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.545436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.545449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.545547 | orchestrator | 2026-01-28 01:15:37.545556 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-01-28 01:15:37.545565 | orchestrator | Wednesday 28 January 2026 01:14:01 +0000 (0:00:15.897) 0:02:44.488 ***** 2026-01-28 01:15:37.545574 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:15:37.545589 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:37.545598 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:15:37.545606 | orchestrator | 2026-01-28 01:15:37.545615 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-01-28 01:15:37.545624 | orchestrator | Wednesday 28 January 2026 01:14:03 +0000 (0:00:01.640) 0:02:46.128 ***** 2026-01-28 01:15:37.545632 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-28 01:15:37.545645 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-28 01:15:37.545654 | 
orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-28 01:15:37.545662 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-28 01:15:37.545671 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-28 01:15:37.545679 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-28 01:15:37.545688 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-28 01:15:37.545697 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-28 01:15:37.545705 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-28 01:15:37.545714 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-28 01:15:37.545722 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-28 01:15:37.545731 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-28 01:15:37.545739 | orchestrator | 2026-01-28 01:15:37.545748 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-01-28 01:15:37.545757 | orchestrator | Wednesday 28 January 2026 01:14:09 +0000 (0:00:06.573) 0:02:52.702 ***** 2026-01-28 01:15:37.545765 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-28 01:15:37.545774 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-28 01:15:37.545783 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-28 01:15:37.545792 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-28 01:15:37.545800 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-28 01:15:37.545809 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-28 01:15:37.545817 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-28 01:15:37.545826 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-28 01:15:37.545834 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-28 01:15:37.545843 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-28 01:15:37.545851 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-28 01:15:37.545860 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-28 01:15:37.545868 | orchestrator | 2026-01-28 01:15:37.545877 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-01-28 01:15:37.545885 | orchestrator | Wednesday 28 January 2026 01:14:14 +0000 (0:00:05.118) 0:02:57.820 ***** 2026-01-28 01:15:37.545894 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-01-28 01:15:37.545902 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-01-28 01:15:37.545911 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-01-28 01:15:37.545919 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-01-28 01:15:37.545928 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-01-28 01:15:37.545937 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-01-28 01:15:37.545945 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-01-28 01:15:37.545954 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-01-28 01:15:37.545967 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-01-28 01:15:37.546010 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-01-28 01:15:37.546064 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-01-28 01:15:37.546073 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-01-28 01:15:37.546082 | orchestrator | 2026-01-28 
01:15:37.546090 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-01-28 01:15:37.546099 | orchestrator | Wednesday 28 January 2026 01:14:20 +0000 (0:00:05.253) 0:03:03.074 ***** 2026-01-28 01:15:37.546109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.546123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.546133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-01-28 01:15:37.546143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.546158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.546175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-01-28 01:15:37.546184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.546197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.546206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.546215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.546224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.546244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-01-28 01:15:37.546253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.546262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.546276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-01-28 01:15:37.546285 | orchestrator | 2026-01-28 01:15:37.546293 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-01-28 01:15:37.546302 | orchestrator | Wednesday 28 January 2026 01:14:23 +0000 (0:00:03.490) 0:03:06.564 ***** 2026-01-28 01:15:37.546311 | orchestrator | skipping: [testbed-node-0] 2026-01-28 01:15:37.546320 | orchestrator | skipping: [testbed-node-1] 2026-01-28 01:15:37.546329 | orchestrator | skipping: [testbed-node-2] 2026-01-28 01:15:37.546337 | orchestrator | 2026-01-28 01:15:37.546346 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-01-28 01:15:37.546355 | orchestrator | Wednesday 28 January 2026 01:14:24 +0000 (0:00:00.301) 0:03:06.866 ***** 2026-01-28 01:15:37.546363 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:37.546372 | orchestrator | 2026-01-28 01:15:37.546380 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-01-28 01:15:37.546389 | orchestrator | Wednesday 28 January 2026 01:14:26 +0000 (0:00:02.341) 0:03:09.207 ***** 2026-01-28 
01:15:37.546398 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:37.546406 | orchestrator | 2026-01-28 01:15:37.546415 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-01-28 01:15:37.546424 | orchestrator | Wednesday 28 January 2026 01:14:28 +0000 (0:00:02.188) 0:03:11.396 ***** 2026-01-28 01:15:37.546438 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:37.546447 | orchestrator | 2026-01-28 01:15:37.546456 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-01-28 01:15:37.546464 | orchestrator | Wednesday 28 January 2026 01:14:30 +0000 (0:00:02.244) 0:03:13.640 ***** 2026-01-28 01:15:37.546473 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:37.546482 | orchestrator | 2026-01-28 01:15:37.546490 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-01-28 01:15:37.546499 | orchestrator | Wednesday 28 January 2026 01:14:34 +0000 (0:00:03.243) 0:03:16.883 ***** 2026-01-28 01:15:37.546507 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:37.546516 | orchestrator | 2026-01-28 01:15:37.546525 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-28 01:15:37.546533 | orchestrator | Wednesday 28 January 2026 01:14:54 +0000 (0:00:20.307) 0:03:37.191 ***** 2026-01-28 01:15:37.546542 | orchestrator | 2026-01-28 01:15:37.546550 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-28 01:15:37.546559 | orchestrator | Wednesday 28 January 2026 01:14:54 +0000 (0:00:00.065) 0:03:37.256 ***** 2026-01-28 01:15:37.546568 | orchestrator | 2026-01-28 01:15:37.546576 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-01-28 01:15:37.546585 | orchestrator | Wednesday 28 January 2026 01:14:54 +0000 (0:00:00.071) 0:03:37.328 ***** 
2026-01-28 01:15:37.546594 | orchestrator | 2026-01-28 01:15:37.546602 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-01-28 01:15:37.546615 | orchestrator | Wednesday 28 January 2026 01:14:54 +0000 (0:00:00.069) 0:03:37.398 ***** 2026-01-28 01:15:37.546624 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:37.546633 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:15:37.546641 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:15:37.546650 | orchestrator | 2026-01-28 01:15:37.546659 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-01-28 01:15:37.546667 | orchestrator | Wednesday 28 January 2026 01:15:04 +0000 (0:00:10.339) 0:03:47.738 ***** 2026-01-28 01:15:37.546676 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:37.546685 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:15:37.546693 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:15:37.546702 | orchestrator | 2026-01-28 01:15:37.546710 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-01-28 01:15:37.546719 | orchestrator | Wednesday 28 January 2026 01:15:10 +0000 (0:00:05.132) 0:03:52.871 ***** 2026-01-28 01:15:37.546728 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:37.546736 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:15:37.546745 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:15:37.546753 | orchestrator | 2026-01-28 01:15:37.546762 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-01-28 01:15:37.546771 | orchestrator | Wednesday 28 January 2026 01:15:20 +0000 (0:00:10.179) 0:04:03.050 ***** 2026-01-28 01:15:37.546779 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:15:37.546788 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:15:37.546797 | orchestrator | changed: [testbed-node-0] 2026-01-28 
01:15:37.546805 | orchestrator | 2026-01-28 01:15:37.546814 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-01-28 01:15:37.546822 | orchestrator | Wednesday 28 January 2026 01:15:28 +0000 (0:00:07.938) 0:04:10.989 ***** 2026-01-28 01:15:37.546831 | orchestrator | changed: [testbed-node-1] 2026-01-28 01:15:37.546841 | orchestrator | changed: [testbed-node-2] 2026-01-28 01:15:37.546855 | orchestrator | changed: [testbed-node-0] 2026-01-28 01:15:37.546868 | orchestrator | 2026-01-28 01:15:37.546881 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:15:37.546893 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-01-28 01:15:37.546906 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-28 01:15:37.546931 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-01-28 01:15:37.546945 | orchestrator | 2026-01-28 01:15:37.546958 | orchestrator | 2026-01-28 01:15:37.546970 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:15:37.546983 | orchestrator | Wednesday 28 January 2026 01:15:36 +0000 (0:00:08.370) 0:04:19.360 ***** 2026-01-28 01:15:37.547018 | orchestrator | =============================================================================== 2026-01-28 01:15:37.547032 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.31s 2026-01-28 01:15:37.547046 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.90s 2026-01-28 01:15:37.547062 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.68s 2026-01-28 01:15:37.547076 | orchestrator | octavia : Adding octavia related roles --------------------------------- 
13.62s 2026-01-28 01:15:37.547091 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.34s 2026-01-28 01:15:37.547105 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.18s 2026-01-28 01:15:37.547119 | orchestrator | octavia : Create security groups for octavia ---------------------------- 8.96s 2026-01-28 01:15:37.547133 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.37s 2026-01-28 01:15:37.547147 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 7.94s 2026-01-28 01:15:37.547161 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.76s 2026-01-28 01:15:37.547175 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 6.67s 2026-01-28 01:15:37.547189 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 6.57s 2026-01-28 01:15:37.547204 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.19s 2026-01-28 01:15:37.547219 | orchestrator | octavia : Get security groups for octavia ------------------------------- 5.88s 2026-01-28 01:15:37.547234 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.42s 2026-01-28 01:15:37.547249 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.25s 2026-01-28 01:15:37.547264 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 5.13s 2026-01-28 01:15:37.547279 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.12s 2026-01-28 01:15:37.547293 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 4.90s 2026-01-28 01:15:37.547308 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 4.84s 
2026-01-28 01:15:37.547323 | orchestrator | 2026-01-28 01:15:37 | INFO  | Wait 1 second(s) until refresh of running tasks 
2026-01-28 01:16:38.377648 | orchestrator | 2026-01-28 01:16:38.711981 | orchestrator | 2026-01-28 01:16:38.718162 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Jan 28 01:16:38 UTC 2026 2026-01-28 01:16:38.718233 | orchestrator | 2026-01-28 01:16:39.215926 | orchestrator | ok: Runtime: 0:33:56.143192 2026-01-28 01:16:39.465136 | 2026-01-28 01:16:39.465281 | TASK [Bootstrap services] 2026-01-28 01:16:40.241500 | orchestrator | 2026-01-28 01:16:40.241694 | orchestrator | # BOOTSTRAP 2026-01-28 01:16:40.241721 | orchestrator | 2026-01-28 01:16:40.241735 | orchestrator | + set -e 2026-01-28 01:16:40.241749 | orchestrator | + echo 2026-01-28 01:16:40.241763 | orchestrator | + echo '# BOOTSTRAP' 2026-01-28 01:16:40.241783 | orchestrator | + echo 2026-01-28 01:16:40.241828 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-01-28 01:16:40.251600 | orchestrator | + set -e 2026-01-28 01:16:40.251664 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-01-28 01:16:44.772647 | orchestrator | 2026-01-28 01:16:44 | INFO  | It takes a moment until task d4f1da63-002d-4dac-b1ce-2b07dbdebbd6 (flavor-manager) has been started and output is visible here. 
2026-01-28 01:16:51.338798 | orchestrator | 2026-01-28 01:16:47 | INFO  | Flavor SCS-1L-1 created 2026-01-28 01:16:51.338936 | orchestrator | 2026-01-28 01:16:47 | INFO  | Flavor SCS-1L-1-5 created 2026-01-28 01:16:51.338955 | orchestrator | 2026-01-28 01:16:47 | INFO  | Flavor SCS-1V-2 created 2026-01-28 01:16:51.338967 | orchestrator | 2026-01-28 01:16:47 | INFO  | Flavor SCS-1V-2-5 created 2026-01-28 01:16:51.338979 | orchestrator | 2026-01-28 01:16:48 | INFO  | Flavor SCS-1V-4 created 2026-01-28 01:16:51.338990 | orchestrator | 2026-01-28 01:16:48 | INFO  | Flavor SCS-1V-4-10 created 2026-01-28 01:16:51.339002 | orchestrator | 2026-01-28 01:16:48 | INFO  | Flavor SCS-1V-8 created 2026-01-28 01:16:51.339014 | orchestrator | 2026-01-28 01:16:48 | INFO  | Flavor SCS-1V-8-20 created 2026-01-28 01:16:51.339037 | orchestrator | 2026-01-28 01:16:48 | INFO  | Flavor SCS-2V-4 created 2026-01-28 01:16:51.339048 | orchestrator | 2026-01-28 01:16:48 | INFO  | Flavor SCS-2V-4-10 created 2026-01-28 01:16:51.339060 | orchestrator | 2026-01-28 01:16:48 | INFO  | Flavor SCS-2V-8 created 2026-01-28 01:16:51.339071 | orchestrator | 2026-01-28 01:16:48 | INFO  | Flavor SCS-2V-8-20 created 2026-01-28 01:16:51.339113 | orchestrator | 2026-01-28 01:16:49 | INFO  | Flavor SCS-2V-16 created 2026-01-28 01:16:51.339125 | orchestrator | 2026-01-28 01:16:49 | INFO  | Flavor SCS-2V-16-50 created 2026-01-28 01:16:51.339136 | orchestrator | 2026-01-28 01:16:49 | INFO  | Flavor SCS-4V-8 created 2026-01-28 01:16:51.339147 | orchestrator | 2026-01-28 01:16:49 | INFO  | Flavor SCS-4V-8-20 created 2026-01-28 01:16:51.339157 | orchestrator | 2026-01-28 01:16:49 | INFO  | Flavor SCS-4V-16 created 2026-01-28 01:16:51.339168 | orchestrator | 2026-01-28 01:16:49 | INFO  | Flavor SCS-4V-16-50 created 2026-01-28 01:16:51.339180 | orchestrator | 2026-01-28 01:16:49 | INFO  | Flavor SCS-4V-32 created 2026-01-28 01:16:51.339190 | orchestrator | 2026-01-28 01:16:50 | INFO  | Flavor SCS-4V-32-100 created 
2026-01-28 01:16:51.339201 | orchestrator | 2026-01-28 01:16:50 | INFO  | Flavor SCS-8V-16 created 2026-01-28 01:16:51.339212 | orchestrator | 2026-01-28 01:16:50 | INFO  | Flavor SCS-8V-16-50 created 2026-01-28 01:16:51.339224 | orchestrator | 2026-01-28 01:16:50 | INFO  | Flavor SCS-8V-32 created 2026-01-28 01:16:51.339235 | orchestrator | 2026-01-28 01:16:50 | INFO  | Flavor SCS-8V-32-100 created 2026-01-28 01:16:51.339246 | orchestrator | 2026-01-28 01:16:50 | INFO  | Flavor SCS-16V-32 created 2026-01-28 01:16:51.339257 | orchestrator | 2026-01-28 01:16:50 | INFO  | Flavor SCS-16V-32-100 created 2026-01-28 01:16:51.339268 | orchestrator | 2026-01-28 01:16:50 | INFO  | Flavor SCS-2V-4-20s created 2026-01-28 01:16:51.339279 | orchestrator | 2026-01-28 01:16:50 | INFO  | Flavor SCS-4V-8-50s created 2026-01-28 01:16:51.339290 | orchestrator | 2026-01-28 01:16:51 | INFO  | Flavor SCS-8V-32-100s created 2026-01-28 01:16:53.618448 | orchestrator | 2026-01-28 01:16:53 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-01-28 01:17:03.709517 | orchestrator | 2026-01-28 01:17:03 | INFO  | Task b2ecc0f3-36ee-414f-9c8d-de5f5aba2d8e (bootstrap-basic) was prepared for execution. 2026-01-28 01:17:03.709638 | orchestrator | 2026-01-28 01:17:03 | INFO  | It takes a moment until task b2ecc0f3-36ee-414f-9c8d-de5f5aba2d8e (bootstrap-basic) has been started and output is visible here. 
2026-01-28 01:17:49.404129 | orchestrator | 2026-01-28 01:17:49.404304 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-01-28 01:17:49.405170 | orchestrator | 2026-01-28 01:17:49.405275 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-01-28 01:17:49.405292 | orchestrator | Wednesday 28 January 2026 01:17:07 +0000 (0:00:00.067) 0:00:00.067 ***** 2026-01-28 01:17:49.405305 | orchestrator | ok: [localhost] 2026-01-28 01:17:49.405318 | orchestrator | 2026-01-28 01:17:49.405329 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-01-28 01:17:49.405341 | orchestrator | Wednesday 28 January 2026 01:17:10 +0000 (0:00:02.871) 0:00:02.939 ***** 2026-01-28 01:17:49.405352 | orchestrator | ok: [localhost] 2026-01-28 01:17:49.405363 | orchestrator | 2026-01-28 01:17:49.405377 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-01-28 01:17:49.405388 | orchestrator | Wednesday 28 January 2026 01:17:18 +0000 (0:00:08.066) 0:00:11.006 ***** 2026-01-28 01:17:49.405399 | orchestrator | changed: [localhost] 2026-01-28 01:17:49.405411 | orchestrator | 2026-01-28 01:17:49.405422 | orchestrator | TASK [Create public network] *************************************************** 2026-01-28 01:17:49.405433 | orchestrator | Wednesday 28 January 2026 01:17:26 +0000 (0:00:07.760) 0:00:18.766 ***** 2026-01-28 01:17:49.405444 | orchestrator | changed: [localhost] 2026-01-28 01:17:49.405455 | orchestrator | 2026-01-28 01:17:49.405466 | orchestrator | TASK [Set public network to default] ******************************************* 2026-01-28 01:17:49.405484 | orchestrator | Wednesday 28 January 2026 01:17:31 +0000 (0:00:04.675) 0:00:23.442 ***** 2026-01-28 01:17:49.405509 | orchestrator | changed: [localhost] 2026-01-28 01:17:49.405528 | orchestrator | 2026-01-28 01:17:49.405546 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-01-28 01:17:49.405567 | orchestrator | Wednesday 28 January 2026 01:17:37 +0000 (0:00:06.505) 0:00:29.947 ***** 2026-01-28 01:17:49.405585 | orchestrator | changed: [localhost] 2026-01-28 01:17:49.405605 | orchestrator | 2026-01-28 01:17:49.405617 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-01-28 01:17:49.405628 | orchestrator | Wednesday 28 January 2026 01:17:41 +0000 (0:00:04.095) 0:00:34.042 ***** 2026-01-28 01:17:49.405639 | orchestrator | changed: [localhost] 2026-01-28 01:17:49.405650 | orchestrator | 2026-01-28 01:17:49.405661 | orchestrator | TASK [Create manager role] ***************************************************** 2026-01-28 01:17:49.405689 | orchestrator | Wednesday 28 January 2026 01:17:45 +0000 (0:00:03.847) 0:00:37.890 ***** 2026-01-28 01:17:49.405701 | orchestrator | ok: [localhost] 2026-01-28 01:17:49.405712 | orchestrator | 2026-01-28 01:17:49.405723 | orchestrator | PLAY RECAP ********************************************************************* 2026-01-28 01:17:49.405734 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-01-28 01:17:49.405747 | orchestrator | 2026-01-28 01:17:49.405758 | orchestrator | 2026-01-28 01:17:49.405769 | orchestrator | TASKS RECAP ******************************************************************** 2026-01-28 01:17:49.405780 | orchestrator | Wednesday 28 January 2026 01:17:49 +0000 (0:00:03.450) 0:00:41.340 ***** 2026-01-28 01:17:49.405791 | orchestrator | =============================================================================== 2026-01-28 01:17:49.405802 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.07s 2026-01-28 01:17:49.405813 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.76s 2026-01-28 01:17:49.405824 | 
orchestrator | Set public network to default ------------------------------------------- 6.51s
2026-01-28 01:17:49.405835 | orchestrator | Create public network --------------------------------------------------- 4.68s
2026-01-28 01:17:49.405871 | orchestrator | Create public subnet ---------------------------------------------------- 4.09s
2026-01-28 01:17:49.405890 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.85s
2026-01-28 01:17:49.405908 | orchestrator | Create manager role ----------------------------------------------------- 3.45s
2026-01-28 01:17:49.405928 | orchestrator | Gathering Facts --------------------------------------------------------- 2.87s
2026-01-28 01:17:51.862772 | orchestrator | 2026-01-28 01:17:51 | INFO  | It takes a moment until task e451c17f-decd-4465-b49a-7ca4b8725349 (image-manager) has been started and output is visible here.
2026-01-28 01:18:30.635617 | orchestrator | 2026-01-28 01:17:54 | INFO  | Processing image 'Cirros 0.6.2'
2026-01-28 01:18:30.636712 | orchestrator | 2026-01-28 01:17:54 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-01-28 01:18:30.636762 | orchestrator | 2026-01-28 01:17:54 | INFO  | Importing image Cirros 0.6.2
2026-01-28 01:18:30.636782 | orchestrator | 2026-01-28 01:17:54 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-28 01:18:30.636795 | orchestrator | 2026-01-28 01:17:56 | INFO  | Waiting for image to leave queued state...
2026-01-28 01:18:30.636808 | orchestrator | 2026-01-28 01:17:58 | INFO  | Waiting for import to complete...
2026-01-28 01:18:30.636820 | orchestrator | 2026-01-28 01:18:08 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-01-28 01:18:30.636833 | orchestrator | 2026-01-28 01:18:08 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-01-28 01:18:30.636844 | orchestrator | 2026-01-28 01:18:08 | INFO  | Setting internal_version = 0.6.2
2026-01-28 01:18:30.636856 | orchestrator | 2026-01-28 01:18:08 | INFO  | Setting image_original_user = cirros
2026-01-28 01:18:30.636868 | orchestrator | 2026-01-28 01:18:08 | INFO  | Adding tag os:cirros
2026-01-28 01:18:30.636879 | orchestrator | 2026-01-28 01:18:09 | INFO  | Setting property architecture: x86_64
2026-01-28 01:18:30.636891 | orchestrator | 2026-01-28 01:18:09 | INFO  | Setting property hw_disk_bus: scsi
2026-01-28 01:18:30.636902 | orchestrator | 2026-01-28 01:18:09 | INFO  | Setting property hw_rng_model: virtio
2026-01-28 01:18:30.636913 | orchestrator | 2026-01-28 01:18:09 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-28 01:18:30.636924 | orchestrator | 2026-01-28 01:18:09 | INFO  | Setting property hw_watchdog_action: reset
2026-01-28 01:18:30.636935 | orchestrator | 2026-01-28 01:18:09 | INFO  | Setting property hypervisor_type: qemu
2026-01-28 01:18:30.636947 | orchestrator | 2026-01-28 01:18:10 | INFO  | Setting property os_distro: cirros
2026-01-28 01:18:30.636958 | orchestrator | 2026-01-28 01:18:10 | INFO  | Setting property os_purpose: minimal
2026-01-28 01:18:30.636969 | orchestrator | 2026-01-28 01:18:10 | INFO  | Setting property replace_frequency: never
2026-01-28 01:18:30.636980 | orchestrator | 2026-01-28 01:18:10 | INFO  | Setting property uuid_validity: none
2026-01-28 01:18:30.636991 | orchestrator | 2026-01-28 01:18:10 | INFO  | Setting property provided_until: none
2026-01-28 01:18:30.637001 | orchestrator | 2026-01-28 01:18:10 | INFO  | Setting property image_description: Cirros
2026-01-28 01:18:30.637013 | orchestrator | 2026-01-28 01:18:11 | INFO  | Setting property image_name: Cirros
2026-01-28 01:18:30.637024 | orchestrator | 2026-01-28 01:18:11 | INFO  | Setting property internal_version: 0.6.2
2026-01-28 01:18:30.637034 | orchestrator | 2026-01-28 01:18:11 | INFO  | Setting property image_original_user: cirros
2026-01-28 01:18:30.637081 | orchestrator | 2026-01-28 01:18:11 | INFO  | Setting property os_version: 0.6.2
2026-01-28 01:18:30.637105 | orchestrator | 2026-01-28 01:18:11 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-01-28 01:18:30.637118 | orchestrator | 2026-01-28 01:18:11 | INFO  | Setting property image_build_date: 2023-05-30
2026-01-28 01:18:30.637129 | orchestrator | 2026-01-28 01:18:11 | INFO  | Checking status of 'Cirros 0.6.2'
2026-01-28 01:18:30.637140 | orchestrator | 2026-01-28 01:18:11 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-01-28 01:18:30.637151 | orchestrator | 2026-01-28 01:18:11 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-01-28 01:18:30.637162 | orchestrator | 2026-01-28 01:18:12 | INFO  | Processing image 'Cirros 0.6.3'
2026-01-28 01:18:30.637211 | orchestrator | 2026-01-28 01:18:12 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-01-28 01:18:30.637233 | orchestrator | 2026-01-28 01:18:12 | INFO  | Importing image Cirros 0.6.3
2026-01-28 01:18:30.637254 | orchestrator | 2026-01-28 01:18:12 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-28 01:18:30.637273 | orchestrator | 2026-01-28 01:18:13 | INFO  | Waiting for image to leave queued state...
2026-01-28 01:18:30.637290 | orchestrator | 2026-01-28 01:18:15 | INFO  | Waiting for import to complete...
2026-01-28 01:18:30.637328 | orchestrator | 2026-01-28 01:18:26 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-01-28 01:18:30.637348 | orchestrator | 2026-01-28 01:18:26 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-01-28 01:18:30.637367 | orchestrator | 2026-01-28 01:18:26 | INFO  | Setting internal_version = 0.6.3
2026-01-28 01:18:30.637385 | orchestrator | 2026-01-28 01:18:26 | INFO  | Setting image_original_user = cirros
2026-01-28 01:18:30.637403 | orchestrator | 2026-01-28 01:18:26 | INFO  | Adding tag os:cirros
2026-01-28 01:18:30.637422 | orchestrator | 2026-01-28 01:18:26 | INFO  | Setting property architecture: x86_64
2026-01-28 01:18:30.637441 | orchestrator | 2026-01-28 01:18:26 | INFO  | Setting property hw_disk_bus: scsi
2026-01-28 01:18:30.637459 | orchestrator | 2026-01-28 01:18:27 | INFO  | Setting property hw_rng_model: virtio
2026-01-28 01:18:30.637477 | orchestrator | 2026-01-28 01:18:27 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-01-28 01:18:30.637490 | orchestrator | 2026-01-28 01:18:27 | INFO  | Setting property hw_watchdog_action: reset
2026-01-28 01:18:30.637501 | orchestrator | 2026-01-28 01:18:27 | INFO  | Setting property hypervisor_type: qemu
2026-01-28 01:18:30.637512 | orchestrator | 2026-01-28 01:18:27 | INFO  | Setting property os_distro: cirros
2026-01-28 01:18:30.637523 | orchestrator | 2026-01-28 01:18:28 | INFO  | Setting property os_purpose: minimal
2026-01-28 01:18:30.637534 | orchestrator | 2026-01-28 01:18:28 | INFO  | Setting property replace_frequency: never
2026-01-28 01:18:30.637546 | orchestrator | 2026-01-28 01:18:28 | INFO  | Setting property uuid_validity: none
2026-01-28 01:18:30.637557 | orchestrator | 2026-01-28 01:18:28 | INFO  | Setting property provided_until: none
2026-01-28 01:18:30.637567 | orchestrator | 2026-01-28 01:18:28 | INFO  | Setting property image_description: Cirros
2026-01-28 01:18:30.637578 | orchestrator | 2026-01-28 01:18:28 | INFO  | Setting property image_name: Cirros
2026-01-28 01:18:30.637590 | orchestrator | 2026-01-28 01:18:29 | INFO  | Setting property internal_version: 0.6.3
2026-01-28 01:18:30.637640 | orchestrator | 2026-01-28 01:18:29 | INFO  | Setting property image_original_user: cirros
2026-01-28 01:18:30.637676 | orchestrator | 2026-01-28 01:18:29 | INFO  | Setting property os_version: 0.6.3
2026-01-28 01:18:30.637695 | orchestrator | 2026-01-28 01:18:29 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-01-28 01:18:30.637714 | orchestrator | 2026-01-28 01:18:29 | INFO  | Setting property image_build_date: 2024-09-26
2026-01-28 01:18:30.637725 | orchestrator | 2026-01-28 01:18:29 | INFO  | Checking status of 'Cirros 0.6.3'
2026-01-28 01:18:30.637736 | orchestrator | 2026-01-28 01:18:29 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-01-28 01:18:30.637747 | orchestrator | 2026-01-28 01:18:29 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-01-28 01:18:30.919168 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-01-28 01:18:33.098896 | orchestrator | 2026-01-28 01:18:33 | INFO  | date: 2026-01-27
2026-01-28 01:18:33.098977 | orchestrator | 2026-01-28 01:18:33 | INFO  | image: octavia-amphora-haproxy-2024.2.20260127.qcow2
2026-01-28 01:18:33.099169 | orchestrator | 2026-01-28 01:18:33 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260127.qcow2
2026-01-28 01:18:33.099216 | orchestrator | 2026-01-28 01:18:33 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260127.qcow2.CHECKSUM
2026-01-28 01:18:33.176837 | orchestrator | 2026-01-28 01:18:33 | INFO  | checksum: 79a25d551462746d00588b0b707b0ff8c99156f6afad74c820452a9cdf61e938
2026-01-28 01:18:33.245381 | orchestrator | 2026-01-28 01:18:33 | INFO  | It takes a moment until task eaf07a6b-affa-4e2d-8f45-f51d6ab6587f (image-manager) has been started and output is visible here.
2026-01-28 01:19:58.947341 | orchestrator | 2026-01-28 01:18:35 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-01-27'
2026-01-28 01:19:58.947452 | orchestrator | 2026-01-28 01:18:35 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260127.qcow2: 200
2026-01-28 01:19:58.947468 | orchestrator | 2026-01-28 01:18:35 | INFO  | Importing image OpenStack Octavia Amphora 2026-01-27
2026-01-28 01:19:58.947480 | orchestrator | 2026-01-28 01:18:35 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260127.qcow2
2026-01-28 01:19:58.947491 | orchestrator | 2026-01-28 01:18:37 | INFO  | Waiting for image to leave queued state...
2026-01-28 01:19:58.947501 | orchestrator | 2026-01-28 01:18:39 | INFO  | Waiting for import to complete...
2026-01-28 01:19:58.947511 | orchestrator | 2026-01-28 01:18:50 | INFO  | Waiting for import to complete...
2026-01-28 01:19:58.947521 | orchestrator | 2026-01-28 01:19:00 | INFO  | Waiting for import to complete...
2026-01-28 01:19:58.947531 | orchestrator | 2026-01-28 01:19:10 | INFO  | Waiting for import to complete...
2026-01-28 01:19:58.947543 | orchestrator | 2026-01-28 01:19:20 | INFO  | Waiting for import to complete...
2026-01-28 01:19:58.947554 | orchestrator | 2026-01-28 01:19:30 | INFO  | Waiting for import to complete...
2026-01-28 01:19:58.947564 | orchestrator | 2026-01-28 01:19:40 | INFO  | Waiting for import to complete...
2026-01-28 01:19:58.947574 | orchestrator | 2026-01-28 01:19:50 | INFO  | Waiting for image to leave queued state...
2026-01-28 01:19:58.947584 | orchestrator | 2026-01-28 01:19:52 | INFO  | Waiting for image to leave queued state...
2026-01-28 01:19:58.947619 | orchestrator | 2026-01-28 01:19:54 | INFO  | Waiting for image to leave queued state...
2026-01-28 01:19:58.947630 | orchestrator | 2026-01-28 01:19:56 | INFO  | Waiting for image to leave queued state...
2026-01-28 01:19:58.947640 | orchestrator | 2026-01-28 01:19:58 | ERROR  | Image OpenStack Octavia Amphora 2026-01-27 seems stuck in queued state
2026-01-28 01:19:58.947651 | orchestrator | 2026-01-28 01:19:58 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-01-28 01:19:58.947662 | orchestrator | 2026-01-28 01:19:58 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-01-28 01:19:58.947679 | orchestrator | 2026-01-28 01:19:58 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-01-28 01:19:58.947693 | orchestrator | 2026-01-28 01:19:58 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-01-28 01:19:58.947703 | orchestrator |
2026-01-28 01:19:58.947714 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output.
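Editor's note: the repeated "Waiting for image to leave queued state..." lines above are a poll-with-deadline loop that eventually gives up with "seems stuck in queued state". A minimal sketch of that pattern, with hypothetical names and injectable clock/sleep (this is not the image-manager's actual code):

```python
import time


def wait_for_status(get_status, leave="queued", timeout=60.0, interval=2.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Poll get_status() until it returns something other than `leave`.

    Returns the final status, or raises TimeoutError when the deadline
    passes while the status is still `leave` (the "stuck in queued
    state" case seen in the log above).
    """
    deadline = clock() + timeout
    while True:
        status = get_status()
        if status != leave:
            return status  # e.g. "active" once the import finished
        if clock() >= deadline:
            raise TimeoutError(f"image seems stuck in {leave!r} state")
        sleep(interval)
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting; in production the defaults poll every couple of seconds, matching the cadence of the log lines.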
2026-01-28 01:19:59.639766 | orchestrator | ERROR
2026-01-28 01:19:59.640189 | orchestrator | {
2026-01-28 01:19:59.640298 | orchestrator |   "delta": "0:03:19.421605",
2026-01-28 01:19:59.640409 | orchestrator |   "end": "2026-01-28 01:19:59.259788",
2026-01-28 01:19:59.640470 | orchestrator |   "msg": "non-zero return code",
2026-01-28 01:19:59.640527 | orchestrator |   "rc": 1,
2026-01-28 01:19:59.640582 | orchestrator |   "start": "2026-01-28 01:16:39.838183"
2026-01-28 01:19:59.640635 | orchestrator | } failure
2026-01-28 01:19:59.661452 |
2026-01-28 01:19:59.661779 | PLAY RECAP
2026-01-28 01:19:59.661890 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-01-28 01:19:59.662218 |
2026-01-28 01:19:59.924954 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-01-28 01:19:59.926113 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-28 01:20:00.689111 |
2026-01-28 01:20:00.689291 | PLAY [Post output play]
2026-01-28 01:20:00.706677 |
2026-01-28 01:20:00.706867 | LOOP [stage-output : Register sources]
2026-01-28 01:20:00.778494 |
2026-01-28 01:20:00.778870 | TASK [stage-output : Check sudo]
2026-01-28 01:20:01.700707 | orchestrator | sudo: a password is required
2026-01-28 01:20:01.819404 | orchestrator | ok: Runtime: 0:00:00.021369
2026-01-28 01:20:01.834952 |
2026-01-28 01:20:01.835114 | LOOP [stage-output : Set source and destination for files and folders]
2026-01-28 01:20:01.877404 |
2026-01-28 01:20:01.877718 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-01-28 01:20:01.947468 | orchestrator | ok
2026-01-28 01:20:01.957117 |
2026-01-28 01:20:01.957275 | LOOP [stage-output : Ensure target folders exist]
2026-01-28 01:20:02.427454 | orchestrator | ok: "docs"
2026-01-28 01:20:02.427711 |
2026-01-28 01:20:02.729268 | orchestrator | ok: "artifacts"
2026-01-28 01:20:03.012982 | orchestrator | ok: "logs"
2026-01-28 01:20:03.033177 |
2026-01-28 01:20:03.033511 | LOOP [stage-output : Copy files and folders to staging folder]
2026-01-28 01:20:03.060116 |
2026-01-28 01:20:03.060398 | TASK [stage-output : Make all log files readable]
2026-01-28 01:20:03.355813 | orchestrator | ok
2026-01-28 01:20:03.370178 |
2026-01-28 01:20:03.370407 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-01-28 01:20:03.406203 | orchestrator | skipping: Conditional result was False
2026-01-28 01:20:03.424201 |
2026-01-28 01:20:03.424425 | TASK [stage-output : Discover log files for compression]
2026-01-28 01:20:03.449560 | orchestrator | skipping: Conditional result was False
2026-01-28 01:20:03.462069 |
2026-01-28 01:20:03.462218 | LOOP [stage-output : Archive everything from logs]
2026-01-28 01:20:03.506161 |
2026-01-28 01:20:03.506356 | PLAY [Post cleanup play]
2026-01-28 01:20:03.514793 |
2026-01-28 01:20:03.514930 | TASK [Set cloud fact (Zuul deployment)]
2026-01-28 01:20:03.573160 | orchestrator | ok
2026-01-28 01:20:03.584172 |
2026-01-28 01:20:03.584299 | TASK [Set cloud fact (local deployment)]
2026-01-28 01:20:03.609212 | orchestrator | skipping: Conditional result was False
2026-01-28 01:20:03.622157 |
2026-01-28 01:20:03.622295 | TASK [Clean the cloud environment]
2026-01-28 01:20:06.128462 | orchestrator | 2026-01-28 01:20:06 - clean up servers
2026-01-28 01:20:06.999961 | orchestrator | 2026-01-28 01:20:06 - testbed-manager
2026-01-28 01:20:07.082696 | orchestrator | 2026-01-28 01:20:07 - testbed-node-0
2026-01-28 01:20:07.163130 | orchestrator | 2026-01-28 01:20:07 - testbed-node-2
2026-01-28 01:20:07.253855 | orchestrator | 2026-01-28 01:20:07 - testbed-node-1
2026-01-28 01:20:07.345829 | orchestrator | 2026-01-28 01:20:07 - testbed-node-4
2026-01-28 01:20:07.434806 | orchestrator | 2026-01-28 01:20:07 - testbed-node-3
2026-01-28 01:20:07.527766 | orchestrator | 2026-01-28 01:20:07 - testbed-node-5
2026-01-28 01:20:07.628121 | orchestrator | 2026-01-28 01:20:07 - clean up keypairs
2026-01-28 01:20:07.645824 | orchestrator | 2026-01-28 01:20:07 - testbed
2026-01-28 01:20:07.672968 | orchestrator | 2026-01-28 01:20:07 - wait for servers to be gone
2026-01-28 01:20:18.524360 | orchestrator | 2026-01-28 01:20:18 - clean up ports
2026-01-28 01:20:18.716006 | orchestrator | 2026-01-28 01:20:18 - 3fab9a24-7426-462b-922c-caf37cb8fe92
2026-01-28 01:20:19.000143 | orchestrator | 2026-01-28 01:20:18 - 69fc9f49-ff67-4d42-8244-b33fac5dd4b1
2026-01-28 01:20:19.251371 | orchestrator | 2026-01-28 01:20:19 - 842c58fb-bf6f-43e6-ad3b-8e85d68e0ac6
2026-01-28 01:20:19.444933 | orchestrator | 2026-01-28 01:20:19 - 99ffdf98-dac8-4509-8b8e-8483b0930af8
2026-01-28 01:20:19.700711 | orchestrator | 2026-01-28 01:20:19 - a13f6cc3-eb55-43e7-aaf3-67fd67eeee3b
2026-01-28 01:20:19.966862 | orchestrator | 2026-01-28 01:20:19 - af801ed9-344b-4e26-bc1b-b501f8a8ea2f
2026-01-28 01:20:20.355770 | orchestrator | 2026-01-28 01:20:20 - dbd34f11-1e7e-4a19-a004-491c4847f751
2026-01-28 01:20:20.574969 | orchestrator | 2026-01-28 01:20:20 - clean up volumes
2026-01-28 01:20:20.693777 | orchestrator | 2026-01-28 01:20:20 - testbed-volume-3-node-base
2026-01-28 01:20:20.734353 | orchestrator | 2026-01-28 01:20:20 - testbed-volume-2-node-base
2026-01-28 01:20:20.774757 | orchestrator | 2026-01-28 01:20:20 - testbed-volume-0-node-base
2026-01-28 01:20:20.816004 | orchestrator | 2026-01-28 01:20:20 - testbed-volume-5-node-base
2026-01-28 01:20:20.859607 | orchestrator | 2026-01-28 01:20:20 - testbed-volume-4-node-base
2026-01-28 01:20:20.900698 | orchestrator | 2026-01-28 01:20:20 - testbed-volume-1-node-base
2026-01-28 01:20:20.939645 | orchestrator | 2026-01-28 01:20:20 - testbed-volume-manager-base
2026-01-28 01:20:20.981828 | orchestrator | 2026-01-28 01:20:20 - testbed-volume-0-node-3
2026-01-28 01:20:21.022142 | orchestrator | 2026-01-28 01:20:21 - testbed-volume-4-node-4
2026-01-28 01:20:21.063897 | orchestrator | 2026-01-28 01:20:21 - testbed-volume-7-node-4
2026-01-28 01:20:21.126373 | orchestrator | 2026-01-28 01:20:21 - testbed-volume-6-node-3
2026-01-28 01:20:21.167723 | orchestrator | 2026-01-28 01:20:21 - testbed-volume-3-node-3
2026-01-28 01:20:21.210856 | orchestrator | 2026-01-28 01:20:21 - testbed-volume-5-node-5
2026-01-28 01:20:21.252771 | orchestrator | 2026-01-28 01:20:21 - testbed-volume-1-node-4
2026-01-28 01:20:21.292643 | orchestrator | 2026-01-28 01:20:21 - testbed-volume-8-node-5
2026-01-28 01:20:21.334137 | orchestrator | 2026-01-28 01:20:21 - testbed-volume-2-node-5
2026-01-28 01:20:21.376622 | orchestrator | 2026-01-28 01:20:21 - disconnect routers
2026-01-28 01:20:21.507429 | orchestrator | 2026-01-28 01:20:21 - testbed
2026-01-28 01:20:22.450631 | orchestrator | 2026-01-28 01:20:22 - clean up subnets
2026-01-28 01:20:22.497607 | orchestrator | 2026-01-28 01:20:22 - subnet-testbed-management
2026-01-28 01:20:23.153090 | orchestrator | 2026-01-28 01:20:23 - clean up networks
2026-01-28 01:20:23.316193 | orchestrator | 2026-01-28 01:20:23 - net-testbed-management
2026-01-28 01:20:23.628115 | orchestrator | 2026-01-28 01:20:23 - clean up security groups
2026-01-28 01:20:23.673048 | orchestrator | 2026-01-28 01:20:23 - testbed-management
2026-01-28 01:20:23.788418 | orchestrator | 2026-01-28 01:20:23 - testbed-node
2026-01-28 01:20:23.894606 | orchestrator | 2026-01-28 01:20:23 - clean up floating ips
2026-01-28 01:20:23.929476 | orchestrator | 2026-01-28 01:20:23 - 81.163.192.115
2026-01-28 01:20:24.319908 | orchestrator | 2026-01-28 01:20:24 - clean up routers
2026-01-28 01:20:24.453721 | orchestrator | 2026-01-28 01:20:24 - testbed
2026-01-28 01:20:26.185836 | orchestrator | ok: Runtime: 0:00:21.933826
2026-01-28 01:20:26.190977 |
2026-01-28 01:20:26.191169 | PLAY RECAP
2026-01-28 01:20:26.191399 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-01-28 01:20:26.191482 |
2026-01-28 01:20:26.339007 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-01-28 01:20:26.340095 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-28 01:20:27.107140 |
2026-01-28 01:20:27.107307 | PLAY [Cleanup play]
2026-01-28 01:20:27.123705 |
2026-01-28 01:20:27.123839 | TASK [Set cloud fact (Zuul deployment)]
2026-01-28 01:20:27.184555 | orchestrator | ok
2026-01-28 01:20:27.191286 |
2026-01-28 01:20:27.191457 | TASK [Set cloud fact (local deployment)]
2026-01-28 01:20:27.225277 | orchestrator | skipping: Conditional result was False
2026-01-28 01:20:27.235072 |
2026-01-28 01:20:27.235178 | TASK [Clean the cloud environment]
2026-01-28 01:20:28.414470 | orchestrator | 2026-01-28 01:20:28 - clean up servers
2026-01-28 01:20:28.989090 | orchestrator | 2026-01-28 01:20:28 - clean up keypairs
2026-01-28 01:20:29.009899 | orchestrator | 2026-01-28 01:20:29 - wait for servers to be gone
2026-01-28 01:20:29.062954 | orchestrator | 2026-01-28 01:20:29 - clean up ports
2026-01-28 01:20:29.152500 | orchestrator | 2026-01-28 01:20:29 - clean up volumes
2026-01-28 01:20:29.218999 | orchestrator | 2026-01-28 01:20:29 - disconnect routers
2026-01-28 01:20:29.247878 | orchestrator | 2026-01-28 01:20:29 - clean up subnets
2026-01-28 01:20:29.271227 | orchestrator | 2026-01-28 01:20:29 - clean up networks
2026-01-28 01:20:29.434620 | orchestrator | 2026-01-28 01:20:29 - clean up security groups
2026-01-28 01:20:29.470657 | orchestrator | 2026-01-28 01:20:29 - clean up floating ips
2026-01-28 01:20:29.501286 | orchestrator | 2026-01-28 01:20:29 - clean up routers
2026-01-28 01:20:29.771956 | orchestrator | ok: Runtime: 0:00:01.496175
2026-01-28 01:20:29.775895 |
2026-01-28 01:20:29.776054 | PLAY RECAP
2026-01-28 01:20:29.776180 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-01-28 01:20:29.776243 |
2026-01-28 01:20:29.898746 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-01-28 01:20:29.901074 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-28 01:20:30.637609 |
2026-01-28 01:20:30.637771 | PLAY [Base post-fetch]
2026-01-28 01:20:30.653410 |
2026-01-28 01:20:30.653546 | TASK [fetch-output : Set log path for multiple nodes]
2026-01-28 01:20:30.719682 | orchestrator | skipping: Conditional result was False
2026-01-28 01:20:30.736016 |
2026-01-28 01:20:30.736231 | TASK [fetch-output : Set log path for single node]
2026-01-28 01:20:30.797886 | orchestrator | ok
2026-01-28 01:20:30.812677 |
2026-01-28 01:20:30.812922 | LOOP [fetch-output : Ensure local output dirs]
2026-01-28 01:20:31.335286 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/659fa68aa70e4b8f8d01f23a210e331e/work/logs"
2026-01-28 01:20:31.625844 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/659fa68aa70e4b8f8d01f23a210e331e/work/artifacts"
2026-01-28 01:20:31.908470 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/659fa68aa70e4b8f8d01f23a210e331e/work/docs"
2026-01-28 01:20:31.933189 |
2026-01-28 01:20:31.933412 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-01-28 01:20:32.884087 | orchestrator | changed: .d..t...... ./
2026-01-28 01:20:32.884389 | orchestrator | changed: All items complete
2026-01-28 01:20:32.884441 |
2026-01-28 01:20:33.577869 | orchestrator | changed: .d..t...... ./
2026-01-28 01:20:34.323480 | orchestrator | changed: .d..t...... ./
2026-01-28 01:20:34.349734 |
2026-01-28 01:20:34.349884 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-01-28 01:20:34.390974 | orchestrator | skipping: Conditional result was False
2026-01-28 01:20:34.394185 | orchestrator | skipping: Conditional result was False
2026-01-28 01:20:34.412258 |
2026-01-28 01:20:34.412430 | PLAY RECAP
2026-01-28 01:20:34.412515 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-01-28 01:20:34.412623 |
2026-01-28 01:20:34.542223 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-01-28 01:20:34.544754 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-28 01:20:35.313616 |
2026-01-28 01:20:35.313788 | PLAY [Base post]
2026-01-28 01:20:35.328747 |
2026-01-28 01:20:35.328896 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-01-28 01:20:37.100539 | orchestrator | changed
2026-01-28 01:20:37.111417 |
2026-01-28 01:20:37.111548 | PLAY RECAP
2026-01-28 01:20:37.111626 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-01-28 01:20:37.111706 |
2026-01-28 01:20:37.226274 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-01-28 01:20:37.229076 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-01-28 01:20:38.007229 |
2026-01-28 01:20:38.007422 | PLAY [Base post-logs]
2026-01-28 01:20:38.018024 |
2026-01-28 01:20:38.018168 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-01-28 01:20:38.473607 | localhost | changed
2026-01-28 01:20:38.489495 |
2026-01-28 01:20:38.489685 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-01-28 01:20:38.528978 | localhost | ok
2026-01-28 01:20:38.536134 |
2026-01-28 01:20:38.536343 | TASK [Set zuul-log-path fact]
2026-01-28 01:20:38.565957 | localhost | ok
2026-01-28 01:20:38.580806 |
2026-01-28 01:20:38.580971 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-01-28 01:20:38.617611 | localhost | ok
2026-01-28 01:20:38.621468 |
2026-01-28 01:20:38.621595 | TASK [upload-logs : Create log directories]
2026-01-28 01:20:39.147303 | localhost | changed
2026-01-28 01:20:39.150160 |
2026-01-28 01:20:39.150272 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-01-28 01:20:39.671802 | localhost -> localhost | ok: Runtime: 0:00:00.007050
2026-01-28 01:20:39.676192 |
2026-01-28 01:20:39.676311 | TASK [upload-logs : Upload logs to log server]
2026-01-28 01:20:40.233192 | localhost | Output suppressed because no_log was given
2026-01-28 01:20:40.237170 |
2026-01-28 01:20:40.237414 | LOOP [upload-logs : Compress console log and json output]
2026-01-28 01:20:40.299955 | localhost | skipping: Conditional result was False
2026-01-28 01:20:40.305013 | localhost | skipping: Conditional result was False
2026-01-28 01:20:40.319222 |
2026-01-28 01:20:40.319518 | LOOP [upload-logs : Upload compressed console log and json output]
2026-01-28 01:20:40.371134 | localhost | skipping: Conditional result was False
2026-01-28 01:20:40.371797 |
2026-01-28 01:20:40.375583 | localhost | skipping: Conditional result was False
2026-01-28 01:20:40.382260 |
2026-01-28 01:20:40.382471 | LOOP [upload-logs : Upload console log and json output]
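Editor's note: both "Clean the cloud environment" runs above tear resources down in a fixed dependency order — compute resources first (servers, keypairs, then wait for servers to be gone), then ports and volumes, then network plumbing (disconnect routers before subnets, networks, security groups, floating IPs, and finally routers), so nothing is deleted while still attached. A hypothetical sketch encoding that order (names are illustrative, not the testbed's actual cleanup code):

```python
# Stage order mirroring the cleanup log above; routers are disconnected
# before subnets/networks are removed, and deleted last.
CLEANUP_ORDER = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",
    "clean up ports",
    "clean up volumes",
    "disconnect routers",
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]


def run_cleanup(handlers):
    """Run each registered stage handler in dependency order.

    `handlers` maps stage name -> zero-argument callable; stages with
    no handler are skipped. Returns the list of stages executed.
    """
    executed = []
    for stage in CLEANUP_ORDER:
        handler = handlers.get(stage)
        if handler is not None:
            handler()
            executed.append(stage)
    return executed
```

Running the same teardown twice is safe by construction: the second pass (as in the cleanup.yml post-run above) simply finds nothing left to delete at each stage.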